00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1750 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3011 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.105 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.203 > git --version # 'git version 2.39.2' 00:00:00.203 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.203 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.203 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.794 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.807 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.818 Checking out Revision 6201031def5bfb7f90a861bc162998684798607e (FETCH_HEAD) 00:00:03.818 > git config core.sparsecheckout # timeout=10 00:00:03.830 > git read-tree -mu HEAD # timeout=10 00:00:03.845 > git checkout -f 6201031def5bfb7f90a861bc162998684798607e # timeout=5 00:00:03.866 Commit message: "scripts/kid: Add issue 3354" 00:00:03.866 > git rev-list --no-walk 6201031def5bfb7f90a861bc162998684798607e # timeout=10 00:00:03.971 [Pipeline] Start of Pipeline 00:00:03.983 [Pipeline] library 00:00:03.984 Loading library shm_lib@master 00:00:03.984 Library shm_lib@master is cached. Copying from home. 00:00:04.001 [Pipeline] node 00:00:04.013 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.015 [Pipeline] { 00:00:04.023 [Pipeline] catchError 00:00:04.024 [Pipeline] { 00:00:04.039 [Pipeline] wrap 00:00:04.051 [Pipeline] { 00:00:04.061 [Pipeline] stage 00:00:04.062 [Pipeline] { (Prologue) 00:00:04.079 [Pipeline] echo 00:00:04.080 Node: VM-host-SM0 00:00:04.084 [Pipeline] cleanWs 00:00:04.092 [WS-CLEANUP] Deleting project workspace... 00:00:04.092 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.097 [WS-CLEANUP] done 00:00:04.295 [Pipeline] setCustomBuildProperty 00:00:04.360 [Pipeline] nodesByLabel 00:00:04.361 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.369 [Pipeline] httpRequest 00:00:04.372 HttpMethod: GET 00:00:04.373 URL: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:04.380 Sending request to url: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:04.383 Response Code: HTTP/1.1 200 OK 00:00:04.383 Success: Status code 200 is in the accepted range: 200,404 00:00:04.383 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:04.929 [Pipeline] sh 00:00:05.210 + tar --no-same-owner -xf jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:05.226 [Pipeline] httpRequest 00:00:05.231 HttpMethod: GET 00:00:05.231 URL: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:05.232 Sending request to url: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:05.234 Response Code: HTTP/1.1 200 OK 00:00:05.235 Success: Status code 200 is in the accepted range: 200,404 00:00:05.235 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:24.632 [Pipeline] sh 00:00:24.914 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:27.457 [Pipeline] sh 00:00:27.737 + git -C spdk log --oneline -n5 00:00:27.737 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:00:27.737 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:00:27.737 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:00:27.737 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:00:27.737 3b33f4333 test/nvme/cuse: Fix typo 00:00:27.755 [Pipeline] writeFile 00:00:27.770 [Pipeline] sh 00:00:28.050 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:28.062 [Pipeline] sh 00:00:28.345 + cat autorun-spdk.conf 00:00:28.345 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.345 SPDK_TEST_NVMF=1 00:00:28.345 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:28.345 SPDK_TEST_VFIOUSER=1 00:00:28.345 SPDK_TEST_USDT=1 00:00:28.345 SPDK_RUN_UBSAN=1 00:00:28.345 SPDK_TEST_NVMF_MDNS=1 00:00:28.345 NET_TYPE=virt 00:00:28.345 SPDK_JSONRPC_GO_CLIENT=1 00:00:28.345 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.351 RUN_NIGHTLY=1 00:00:28.355 [Pipeline] } 00:00:28.382 [Pipeline] // stage 00:00:28.409 [Pipeline] stage 00:00:28.410 [Pipeline] { (Run VM) 00:00:28.425 [Pipeline] sh 00:00:28.708 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:28.708 + echo 'Start stage prepare_nvme.sh' 00:00:28.708 Start stage prepare_nvme.sh 00:00:28.708 + [[ -n 7 ]] 00:00:28.708 + disk_prefix=ex7 00:00:28.708 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:28.708 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:28.708 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:28.708 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.708 ++ SPDK_TEST_NVMF=1 00:00:28.708 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:28.708 ++ SPDK_TEST_VFIOUSER=1 00:00:28.708 ++ SPDK_TEST_USDT=1 00:00:28.708 ++ SPDK_RUN_UBSAN=1 00:00:28.708 ++ SPDK_TEST_NVMF_MDNS=1 00:00:28.708 ++ NET_TYPE=virt 00:00:28.708 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:28.708 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.709 ++ RUN_NIGHTLY=1 00:00:28.709 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:28.709 + nvme_files=() 00:00:28.709 + declare -A nvme_files 00:00:28.709 + backend_dir=/var/lib/libvirt/images/backends 00:00:28.709 + nvme_files['nvme.img']=5G 00:00:28.709 + nvme_files['nvme-cmb.img']=5G 00:00:28.709 + nvme_files['nvme-multi0.img']=4G 00:00:28.709 + nvme_files['nvme-multi1.img']=4G 00:00:28.709 + nvme_files['nvme-multi2.img']=4G 00:00:28.709 + nvme_files['nvme-openstack.img']=8G 00:00:28.709 + nvme_files['nvme-zns.img']=5G 00:00:28.709 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:28.709 + (( SPDK_TEST_FTL == 1 )) 00:00:28.709 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:28.709 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:28.709 + for nvme in "${!nvme_files[@]}" 00:00:28.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:28.709 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.709 + for nvme in "${!nvme_files[@]}" 00:00:28.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:28.709 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.709 + for nvme in "${!nvme_files[@]}" 00:00:28.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:28.709 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:28.709 + for nvme in "${!nvme_files[@]}" 00:00:28.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:28.709 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.709 + for nvme in "${!nvme_files[@]}" 00:00:28.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:28.709 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.709 + for nvme in "${!nvme_files[@]}" 00:00:28.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:28.709 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.709 + for nvme in "${!nvme_files[@]}" 00:00:28.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:28.968 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.968 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:28.968 + echo 'End stage prepare_nvme.sh' 00:00:28.968 End stage prepare_nvme.sh 00:00:28.982 [Pipeline] sh 00:00:29.264 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:29.264 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:00:29.264 00:00:29.264 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:29.264 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:29.264 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:29.264 HELP=0 00:00:29.264 DRY_RUN=0 00:00:29.264 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:29.264 NVME_DISKS_TYPE=nvme,nvme, 00:00:29.264 NVME_AUTO_CREATE=0 00:00:29.264 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:29.264 NVME_CMB=,, 00:00:29.264 NVME_PMR=,, 00:00:29.264 NVME_ZNS=,, 00:00:29.264 NVME_MS=,, 00:00:29.264 NVME_FDP=,, 00:00:29.265 SPDK_VAGRANT_DISTRO=fedora38 00:00:29.265 SPDK_VAGRANT_VMCPU=10 00:00:29.265 SPDK_VAGRANT_VMRAM=12288 00:00:29.265 SPDK_VAGRANT_PROVIDER=libvirt 00:00:29.265 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:29.265 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:29.265 SPDK_OPENSTACK_NETWORK=0 00:00:29.265 VAGRANT_PACKAGE_BOX=0 00:00:29.265 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:29.265 FORCE_DISTRO=true 00:00:29.265 VAGRANT_BOX_VERSION= 00:00:29.265 EXTRA_VAGRANTFILES= 00:00:29.265 NIC_MODEL=e1000 00:00:29.265 00:00:29.265 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:29.265 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:31.797 Bringing machine 'default' up with 'libvirt' provider... 00:00:32.733 ==> default: Creating image (snapshot of base box volume). 00:00:33.031 ==> default: Creating domain with the following settings... 00:00:33.031 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1714067730_1beba101fb234c1d15d1 00:00:33.031 ==> default: -- Domain type: kvm 00:00:33.031 ==> default: -- Cpus: 10 00:00:33.031 ==> default: -- Feature: acpi 00:00:33.031 ==> default: -- Feature: apic 00:00:33.031 ==> default: -- Feature: pae 00:00:33.031 ==> default: -- Memory: 12288M 00:00:33.031 ==> default: -- Memory Backing: hugepages: 00:00:33.031 ==> default: -- Management MAC: 00:00:33.031 ==> default: -- Loader: 00:00:33.031 ==> default: -- Nvram: 00:00:33.031 ==> default: -- Base box: spdk/fedora38 00:00:33.031 ==> default: -- Storage pool: default 00:00:33.031 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1714067730_1beba101fb234c1d15d1.img (20G) 00:00:33.031 ==> default: -- Volume Cache: default 00:00:33.031 ==> default: -- Kernel: 00:00:33.031 ==> default: -- Initrd: 00:00:33.031 ==> default: -- Graphics Type: vnc 00:00:33.031 ==> default: -- Graphics Port: -1 00:00:33.031 ==> default: -- Graphics IP: 127.0.0.1 00:00:33.031 ==> default: -- Graphics Password: Not defined 00:00:33.031 ==> default: -- Video Type: cirrus 00:00:33.031 ==> default: -- Video VRAM: 9216 00:00:33.031 ==> default: -- Sound Type: 00:00:33.031 ==> default: -- Keymap: en-us 00:00:33.031 ==> default: -- TPM Path: 00:00:33.031 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:33.031 ==> default: -- Command line args: 00:00:33.031 ==> default: -> value=-device, 00:00:33.031 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:33.031 ==> default: -> value=-drive, 00:00:33.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:00:33.031 ==> default: -> value=-device, 00:00:33.031 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.031 ==> default: -> value=-device, 00:00:33.031 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:33.031 ==> default: -> value=-drive, 00:00:33.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:33.031 ==> default: -> value=-device, 00:00:33.031 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.031 ==> default: -> value=-drive, 00:00:33.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:33.031 ==> default: -> value=-device, 00:00:33.031 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.031 ==> default: -> value=-drive, 00:00:33.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:33.031 ==> default: -> value=-device, 00:00:33.031 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.301 ==> default: Creating shared folders metadata... 00:00:33.301 ==> default: Starting domain. 00:00:35.212 ==> default: Waiting for domain to get an IP address... 00:00:53.301 ==> default: Waiting for SSH to become available... 00:00:54.679 ==> default: Configuring and enabling network interfaces... 00:00:58.867 default: SSH address: 192.168.121.176:22 00:00:58.867 default: SSH username: vagrant 00:00:58.867 default: SSH auth method: private key 00:01:01.401 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:09.513 ==> default: Mounting SSHFS shared folder... 00:01:11.417 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.417 ==> default: Checking Mount.. 00:01:12.354 ==> default: Folder Successfully Mounted! 00:01:12.354 ==> default: Running provisioner: file... 00:01:13.292 default: ~/.gitconfig => .gitconfig 00:01:13.860 00:01:13.860 SUCCESS! 00:01:13.860 00:01:13.860 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:13.860 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:13.860 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
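The -drive/-device pairs printed above show how the backing images created in prepare_nvme.sh are exposed to the guest: each ex7-nvme*.img file becomes a raw -drive, a -device nvme entry creates a controller (serial 12340 or 12341), and one -device nvme-ns entry per image attaches it as a namespace on that controller. Below is a minimal sketch of just the storage-related portion of that command line, assuming the vanilla QEMU binary named earlier in this log; the real invocation is assembled by vagrant-libvirt, so this is illustrative only and not a complete VM command.

#!/usr/bin/env bash
# Illustrative sketch: NVMe storage arguments only, values copied from the log above; not a full VM launch.
QEMU=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64   # emulator path passed to vagrant_create_vm.sh
B=/var/lib/libvirt/images/backends
"$QEMU" \
  -device nvme,id=nvme-0,serial=12340 \
  -drive format=raw,file=$B/ex7-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341 \
  -drive format=raw,file=$B/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=$B/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=$B/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest these two controllers are the ones that later appear in the setup.sh status output further down: the single-namespace controller as nvme1 (nvme1n1) and the three-namespace controller as nvme0 (nvme0n1 through nvme0n3).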
00:01:13.860 00:01:13.869 [Pipeline] } 00:01:13.887 [Pipeline] // stage 00:01:13.896 [Pipeline] dir 00:01:13.897 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:13.898 [Pipeline] { 00:01:13.915 [Pipeline] catchError 00:01:13.916 [Pipeline] { 00:01:13.931 [Pipeline] sh 00:01:14.210 + vagrant ssh-config --host vagrant 00:01:14.211 + sed -ne /^Host/,$p 00:01:14.211 + tee ssh_conf 00:01:17.502 Host vagrant 00:01:17.502 HostName 192.168.121.176 00:01:17.502 User vagrant 00:01:17.502 Port 22 00:01:17.502 UserKnownHostsFile /dev/null 00:01:17.502 StrictHostKeyChecking no 00:01:17.502 PasswordAuthentication no 00:01:17.502 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:17.502 IdentitiesOnly yes 00:01:17.502 LogLevel FATAL 00:01:17.502 ForwardAgent yes 00:01:17.502 ForwardX11 yes 00:01:17.502 00:01:17.547 [Pipeline] withEnv 00:01:17.548 [Pipeline] { 00:01:17.561 [Pipeline] sh 00:01:17.837 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.837 source /etc/os-release 00:01:17.837 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.837 # Minimal, systemd-like check. 00:01:17.837 if [[ -e /.dockerenv ]]; then 00:01:17.837 # Clear garbage from the node's name: 00:01:17.837 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.837 # $HOSTNAME is the actual container id 00:01:17.837 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.837 if mountpoint -q /etc/hostname; then 00:01:17.837 # We can assume this is a mount from a host where container is running, 00:01:17.837 # so fetch its hostname to easily identify the target swarm worker. 00:01:17.838 container="$(< /etc/hostname) ($agent)" 00:01:17.838 else 00:01:17.838 # Fallback 00:01:17.838 container=$agent 00:01:17.838 fi 00:01:17.838 fi 00:01:17.838 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.838 00:01:18.105 [Pipeline] } 00:01:18.124 [Pipeline] // withEnv 00:01:18.130 [Pipeline] setCustomBuildProperty 00:01:18.143 [Pipeline] stage 00:01:18.145 [Pipeline] { (Tests) 00:01:18.158 [Pipeline] sh 00:01:18.435 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.707 [Pipeline] timeout 00:01:18.708 Timeout set to expire in 40 min 00:01:18.709 [Pipeline] { 00:01:18.723 [Pipeline] sh 00:01:19.001 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:19.568 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:01:19.591 [Pipeline] sh 00:01:19.871 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.147 [Pipeline] sh 00:01:20.426 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:20.699 [Pipeline] sh 00:01:20.979 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:21.239 ++ readlink -f spdk_repo 00:01:21.239 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.239 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.239 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.239 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.239 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.239 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.239 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.239 + cd /home/vagrant/spdk_repo 00:01:21.239 + source /etc/os-release 00:01:21.239 ++ NAME='Fedora Linux' 00:01:21.239 ++ VERSION='38 (Cloud Edition)' 00:01:21.239 ++ ID=fedora 00:01:21.239 ++ VERSION_ID=38 00:01:21.239 ++ VERSION_CODENAME= 00:01:21.239 ++ PLATFORM_ID=platform:f38 00:01:21.239 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:21.239 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.239 ++ LOGO=fedora-logo-icon 00:01:21.239 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:21.239 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.239 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:21.239 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.239 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.239 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.239 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:21.239 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.239 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:21.239 ++ SUPPORT_END=2024-05-14 00:01:21.239 ++ VARIANT='Cloud Edition' 00:01:21.239 ++ VARIANT_ID=cloud 00:01:21.239 + uname -a 00:01:21.239 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:21.239 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:21.239 Hugepages 00:01:21.239 node hugesize free / total 00:01:21.239 node0 1048576kB 0 / 0 00:01:21.239 node0 2048kB 0 / 0 00:01:21.239 00:01:21.239 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.239 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:21.239 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:21.498 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:01:21.498 + rm -f /tmp/spdk-ld-path 00:01:21.498 + source autorun-spdk.conf 00:01:21.498 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.498 ++ SPDK_TEST_NVMF=1 00:01:21.498 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.498 ++ SPDK_TEST_VFIOUSER=1 00:01:21.498 ++ SPDK_TEST_USDT=1 00:01:21.498 ++ SPDK_RUN_UBSAN=1 00:01:21.498 ++ SPDK_TEST_NVMF_MDNS=1 00:01:21.498 ++ NET_TYPE=virt 00:01:21.498 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:21.498 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.498 ++ RUN_NIGHTLY=1 00:01:21.498 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.498 + [[ -n '' ]] 00:01:21.498 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:21.498 + for M in /var/spdk/build-*-manifest.txt 00:01:21.498 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.498 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.498 + for M in /var/spdk/build-*-manifest.txt 00:01:21.498 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.498 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.498 ++ uname 00:01:21.498 + [[ Linux == \L\i\n\u\x ]] 00:01:21.498 + sudo dmesg -T 00:01:21.498 + sudo dmesg --clear 00:01:21.498 + dmesg_pid=5135 00:01:21.498 + [[ Fedora Linux == FreeBSD ]] 00:01:21.498 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.498 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.498 + sudo dmesg -Tw 00:01:21.498 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.498 + [[ -x /usr/src/fio-static/fio ]] 00:01:21.498 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.498 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.498 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.498 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:21.498 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.498 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.498 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.498 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.498 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.498 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.498 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:21.498 Test configuration: 00:01:21.499 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.499 SPDK_TEST_NVMF=1 00:01:21.499 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.499 SPDK_TEST_VFIOUSER=1 00:01:21.499 SPDK_TEST_USDT=1 00:01:21.499 SPDK_RUN_UBSAN=1 00:01:21.499 SPDK_TEST_NVMF_MDNS=1 00:01:21.499 NET_TYPE=virt 00:01:21.499 SPDK_JSONRPC_GO_CLIENT=1 00:01:21.499 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.499 RUN_NIGHTLY=1 17:56:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:21.499 17:56:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.499 17:56:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.499 17:56:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.499 17:56:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.499 17:56:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.499 17:56:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.499 17:56:19 -- paths/export.sh@5 -- $ export PATH 00:01:21.499 17:56:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.499 17:56:19 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:21.499 17:56:19 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:21.499 17:56:19 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714067779.XXXXXX 00:01:21.499 17:56:19 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714067779.zMCkj2 00:01:21.499 17:56:19 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:21.499 17:56:19 
-- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:21.499 17:56:19 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:21.499 17:56:19 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:21.499 17:56:19 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.499 17:56:19 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:21.499 17:56:19 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:21.499 17:56:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.758 17:56:19 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:01:21.758 17:56:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.758 17:56:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.758 17:56:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:21.758 17:56:19 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.758 Thu Apr 25 05:56:19 PM UTC 2024 00:01:21.758 17:56:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.758 LTS-24-g36faa8c31 00:01:21.758 17:56:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.758 17:56:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.758 17:56:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.758 17:56:19 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:21.758 17:56:19 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:21.758 17:56:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.758 ************************************ 00:01:21.758 START TEST ubsan 00:01:21.758 ************************************ 00:01:21.758 using ubsan 00:01:21.758 17:56:19 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:21.758 00:01:21.758 real 0m0.000s 00:01:21.758 user 0m0.000s 00:01:21.758 sys 0m0.000s 00:01:21.758 17:56:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.758 17:56:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.758 ************************************ 00:01:21.758 END TEST ubsan 00:01:21.758 ************************************ 00:01:21.758 17:56:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.758 17:56:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.758 17:56:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.758 17:56:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.758 17:56:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.758 17:56:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.758 17:56:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.758 17:56:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.758 17:56:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:01:22.018 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:22.018 Using default DPDK in 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:01:22.276 Using 'verbs' RDMA provider 00:01:37.795 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:49.998 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:49.998 go version go1.21.1 linux/amd64 00:01:49.998 Creating mk/config.mk...done. 00:01:49.998 Creating mk/cc.flags.mk...done. 00:01:49.998 Type 'make' to build. 00:01:49.998 17:56:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:49.999 17:56:46 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:49.999 17:56:46 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:49.999 17:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.999 ************************************ 00:01:49.999 START TEST make 00:01:49.999 ************************************ 00:01:49.999 17:56:46 -- common/autotest_common.sh@1104 -- $ make -j10 00:01:49.999 make[1]: Nothing to be done for 'all'. 00:01:50.565 The Meson build system 00:01:50.565 Version: 1.3.1 00:01:50.565 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:01:50.565 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:50.565 Build type: native build 00:01:50.565 Project name: libvfio-user 00:01:50.565 Project version: 0.0.1 00:01:50.565 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.565 C linker for the host machine: cc ld.bfd 2.39-16 00:01:50.565 Host machine cpu family: x86_64 00:01:50.565 Host machine cpu: x86_64 00:01:50.565 Run-time dependency threads found: YES 00:01:50.565 Library dl found: YES 00:01:50.565 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.565 Run-time dependency json-c found: YES 0.17 00:01:50.565 Run-time dependency cmocka found: YES 1.1.7 00:01:50.565 Program pytest-3 found: NO 00:01:50.565 Program flake8 found: NO 00:01:50.565 Program misspell-fixer found: NO 00:01:50.565 Program restructuredtext-lint found: NO 00:01:50.565 Program valgrind found: YES (/usr/bin/valgrind) 00:01:50.565 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.565 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.565 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.565 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:50.565 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:01:50.565 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:01:50.565 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:50.565 Build targets in project: 8 00:01:50.565 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:50.565 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:50.565 00:01:50.566 libvfio-user 0.0.1 00:01:50.566 00:01:50.566 User defined options 00:01:50.566 buildtype : debug 00:01:50.566 default_library: shared 00:01:50.566 libdir : /usr/local/lib 00:01:50.566 00:01:50.566 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.824 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:51.083 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:51.083 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:51.083 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:51.083 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:51.083 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:51.083 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:51.083 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:51.083 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:51.083 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:51.341 [10/37] Compiling C object samples/null.p/null.c.o 00:01:51.341 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:51.341 [12/37] Compiling C object samples/client.p/client.c.o 00:01:51.341 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:51.341 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:51.341 [15/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:51.341 [16/37] Linking target samples/client 00:01:51.341 [17/37] Compiling C object samples/server.p/server.c.o 00:01:51.341 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:51.341 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:51.341 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:51.341 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:51.341 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:51.341 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:51.341 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:51.341 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:51.341 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:51.599 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:01:51.599 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:51.599 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:51.599 [30/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:51.599 [31/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:51.599 [32/37] Linking target samples/null 00:01:51.599 [33/37] Linking target samples/server 00:01:51.599 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:51.599 [35/37] Linking target samples/gpio-pci-idio-16 00:01:51.599 [36/37] Linking target test/unit_tests 00:01:51.599 [37/37] Linking target samples/lspci 00:01:51.857 INFO: autodetecting backend as ninja 00:01:51.857 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:51.857 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:52.116 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:52.116 ninja: no work to do. 00:02:02.115 The Meson build system 00:02:02.115 Version: 1.3.1 00:02:02.115 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:02.115 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:02.115 Build type: native build 00:02:02.115 Program cat found: YES (/usr/bin/cat) 00:02:02.115 Project name: DPDK 00:02:02.115 Project version: 23.11.0 00:02:02.115 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:02.115 C linker for the host machine: cc ld.bfd 2.39-16 00:02:02.115 Host machine cpu family: x86_64 00:02:02.115 Host machine cpu: x86_64 00:02:02.115 Message: ## Building in Developer Mode ## 00:02:02.115 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.115 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:02.115 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.115 Program python3 found: YES (/usr/bin/python3) 00:02:02.115 Program cat found: YES (/usr/bin/cat) 00:02:02.115 Compiler for C supports arguments -march=native: YES 00:02:02.115 Checking for size of "void *" : 8 00:02:02.115 Checking for size of "void *" : 8 (cached) 00:02:02.115 Library m found: YES 00:02:02.115 Library numa found: YES 00:02:02.115 Has header "numaif.h" : YES 00:02:02.115 Library fdt found: NO 00:02:02.115 Library execinfo found: NO 00:02:02.115 Has header "execinfo.h" : YES 00:02:02.115 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:02.115 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.115 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.115 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.115 Run-time dependency openssl found: YES 3.0.9 00:02:02.115 Run-time dependency libpcap found: YES 1.10.4 00:02:02.115 Has header "pcap.h" with dependency libpcap: YES 00:02:02.115 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.115 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.115 Compiler for C supports arguments -Wformat: YES 00:02:02.115 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.115 Compiler for C supports arguments -Wformat-security: NO 00:02:02.115 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.115 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.115 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.115 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.115 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.115 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.115 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.115 Compiler for C supports arguments -Wundef: YES 00:02:02.115 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.115 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.115 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.115 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.115 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.115 Program objdump found: YES (/usr/bin/objdump) 00:02:02.115 
Compiler for C supports arguments -mavx512f: YES 00:02:02.115 Checking if "AVX512 checking" compiles: YES 00:02:02.115 Fetching value of define "__SSE4_2__" : 1 00:02:02.115 Fetching value of define "__AES__" : 1 00:02:02.115 Fetching value of define "__AVX__" : 1 00:02:02.115 Fetching value of define "__AVX2__" : 1 00:02:02.115 Fetching value of define "__AVX512BW__" : (undefined) 00:02:02.115 Fetching value of define "__AVX512CD__" : (undefined) 00:02:02.115 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:02.115 Fetching value of define "__AVX512F__" : (undefined) 00:02:02.115 Fetching value of define "__AVX512VL__" : (undefined) 00:02:02.115 Fetching value of define "__PCLMUL__" : 1 00:02:02.115 Fetching value of define "__RDRND__" : 1 00:02:02.115 Fetching value of define "__RDSEED__" : 1 00:02:02.115 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.115 Fetching value of define "__znver1__" : (undefined) 00:02:02.115 Fetching value of define "__znver2__" : (undefined) 00:02:02.115 Fetching value of define "__znver3__" : (undefined) 00:02:02.115 Fetching value of define "__znver4__" : (undefined) 00:02:02.115 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.115 Message: lib/log: Defining dependency "log" 00:02:02.115 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.115 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.115 Checking for function "getentropy" : NO 00:02:02.115 Message: lib/eal: Defining dependency "eal" 00:02:02.115 Message: lib/ring: Defining dependency "ring" 00:02:02.115 Message: lib/rcu: Defining dependency "rcu" 00:02:02.115 Message: lib/mempool: Defining dependency "mempool" 00:02:02.115 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.115 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.115 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:02.115 Compiler for C supports arguments -mpclmul: YES 00:02:02.115 Compiler for C supports arguments -maes: YES 00:02:02.115 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.115 Compiler for C supports arguments -mavx512bw: YES 00:02:02.115 Compiler for C supports arguments -mavx512dq: YES 00:02:02.115 Compiler for C supports arguments -mavx512vl: YES 00:02:02.115 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.115 Compiler for C supports arguments -mavx2: YES 00:02:02.115 Compiler for C supports arguments -mavx: YES 00:02:02.115 Message: lib/net: Defining dependency "net" 00:02:02.115 Message: lib/meter: Defining dependency "meter" 00:02:02.115 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.115 Message: lib/pci: Defining dependency "pci" 00:02:02.115 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.115 Message: lib/hash: Defining dependency "hash" 00:02:02.115 Message: lib/timer: Defining dependency "timer" 00:02:02.115 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.115 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.115 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.115 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.115 Message: lib/power: Defining dependency "power" 00:02:02.115 Message: lib/reorder: Defining dependency "reorder" 00:02:02.115 Message: lib/security: Defining dependency "security" 00:02:02.115 Has header "linux/userfaultfd.h" : YES 00:02:02.115 Has header "linux/vduse.h" : YES 00:02:02.115 Message: lib/vhost: Defining dependency "vhost" 00:02:02.115 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:02.115 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.115 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.115 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.115 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.115 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.115 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.115 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.115 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.115 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.115 Program doxygen found: YES (/usr/bin/doxygen) 00:02:02.115 Configuring doxy-api-html.conf using configuration 00:02:02.115 Configuring doxy-api-man.conf using configuration 00:02:02.115 Program mandb found: YES (/usr/bin/mandb) 00:02:02.115 Program sphinx-build found: NO 00:02:02.115 Configuring rte_build_config.h using configuration 00:02:02.115 Message: 00:02:02.115 ================= 00:02:02.115 Applications Enabled 00:02:02.115 ================= 00:02:02.115 00:02:02.115 apps: 00:02:02.115 00:02:02.115 00:02:02.115 Message: 00:02:02.115 ================= 00:02:02.115 Libraries Enabled 00:02:02.115 ================= 00:02:02.115 00:02:02.115 libs: 00:02:02.115 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.115 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.115 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.115 00:02:02.115 Message: 00:02:02.115 =============== 00:02:02.115 Drivers Enabled 00:02:02.115 =============== 00:02:02.115 00:02:02.115 common: 00:02:02.115 00:02:02.115 bus: 00:02:02.115 pci, vdev, 00:02:02.115 mempool: 00:02:02.115 ring, 00:02:02.115 dma: 00:02:02.115 00:02:02.115 net: 00:02:02.115 00:02:02.115 crypto: 00:02:02.115 00:02:02.115 compress: 00:02:02.115 00:02:02.115 vdpa: 00:02:02.115 00:02:02.115 00:02:02.115 Message: 00:02:02.115 ================= 00:02:02.115 Content Skipped 00:02:02.115 ================= 00:02:02.115 00:02:02.115 apps: 00:02:02.115 dumpcap: explicitly disabled via build config 00:02:02.115 graph: explicitly disabled via build config 00:02:02.115 pdump: explicitly disabled via build config 00:02:02.115 proc-info: explicitly disabled via build config 00:02:02.115 test-acl: explicitly disabled via build config 00:02:02.115 test-bbdev: explicitly disabled via build config 00:02:02.116 test-cmdline: explicitly disabled via build config 00:02:02.116 test-compress-perf: explicitly disabled via build config 00:02:02.116 test-crypto-perf: explicitly disabled via build config 00:02:02.116 test-dma-perf: explicitly disabled via build config 00:02:02.116 test-eventdev: explicitly disabled via build config 00:02:02.116 test-fib: explicitly disabled via build config 00:02:02.116 test-flow-perf: explicitly disabled via build config 00:02:02.116 test-gpudev: explicitly disabled via build config 00:02:02.116 test-mldev: explicitly disabled via build config 00:02:02.116 test-pipeline: explicitly disabled via build config 00:02:02.116 test-pmd: explicitly disabled via build config 00:02:02.116 test-regex: explicitly disabled via build config 00:02:02.116 test-sad: explicitly disabled via build config 00:02:02.116 test-security-perf: explicitly disabled via build config 00:02:02.116 00:02:02.116 libs: 00:02:02.116 metrics: explicitly disabled 
via build config 00:02:02.116 acl: explicitly disabled via build config 00:02:02.116 bbdev: explicitly disabled via build config 00:02:02.116 bitratestats: explicitly disabled via build config 00:02:02.116 bpf: explicitly disabled via build config 00:02:02.116 cfgfile: explicitly disabled via build config 00:02:02.116 distributor: explicitly disabled via build config 00:02:02.116 efd: explicitly disabled via build config 00:02:02.116 eventdev: explicitly disabled via build config 00:02:02.116 dispatcher: explicitly disabled via build config 00:02:02.116 gpudev: explicitly disabled via build config 00:02:02.116 gro: explicitly disabled via build config 00:02:02.116 gso: explicitly disabled via build config 00:02:02.116 ip_frag: explicitly disabled via build config 00:02:02.116 jobstats: explicitly disabled via build config 00:02:02.116 latencystats: explicitly disabled via build config 00:02:02.116 lpm: explicitly disabled via build config 00:02:02.116 member: explicitly disabled via build config 00:02:02.116 pcapng: explicitly disabled via build config 00:02:02.116 rawdev: explicitly disabled via build config 00:02:02.116 regexdev: explicitly disabled via build config 00:02:02.116 mldev: explicitly disabled via build config 00:02:02.116 rib: explicitly disabled via build config 00:02:02.116 sched: explicitly disabled via build config 00:02:02.116 stack: explicitly disabled via build config 00:02:02.116 ipsec: explicitly disabled via build config 00:02:02.116 pdcp: explicitly disabled via build config 00:02:02.116 fib: explicitly disabled via build config 00:02:02.116 port: explicitly disabled via build config 00:02:02.116 pdump: explicitly disabled via build config 00:02:02.116 table: explicitly disabled via build config 00:02:02.116 pipeline: explicitly disabled via build config 00:02:02.116 graph: explicitly disabled via build config 00:02:02.116 node: explicitly disabled via build config 00:02:02.116 00:02:02.116 drivers: 00:02:02.116 common/cpt: not in enabled drivers build config 00:02:02.116 common/dpaax: not in enabled drivers build config 00:02:02.116 common/iavf: not in enabled drivers build config 00:02:02.116 common/idpf: not in enabled drivers build config 00:02:02.116 common/mvep: not in enabled drivers build config 00:02:02.116 common/octeontx: not in enabled drivers build config 00:02:02.116 bus/auxiliary: not in enabled drivers build config 00:02:02.116 bus/cdx: not in enabled drivers build config 00:02:02.116 bus/dpaa: not in enabled drivers build config 00:02:02.116 bus/fslmc: not in enabled drivers build config 00:02:02.116 bus/ifpga: not in enabled drivers build config 00:02:02.116 bus/platform: not in enabled drivers build config 00:02:02.116 bus/vmbus: not in enabled drivers build config 00:02:02.116 common/cnxk: not in enabled drivers build config 00:02:02.116 common/mlx5: not in enabled drivers build config 00:02:02.116 common/nfp: not in enabled drivers build config 00:02:02.116 common/qat: not in enabled drivers build config 00:02:02.116 common/sfc_efx: not in enabled drivers build config 00:02:02.116 mempool/bucket: not in enabled drivers build config 00:02:02.116 mempool/cnxk: not in enabled drivers build config 00:02:02.116 mempool/dpaa: not in enabled drivers build config 00:02:02.116 mempool/dpaa2: not in enabled drivers build config 00:02:02.116 mempool/octeontx: not in enabled drivers build config 00:02:02.116 mempool/stack: not in enabled drivers build config 00:02:02.116 dma/cnxk: not in enabled drivers build config 00:02:02.116 dma/dpaa: not in enabled 
drivers build config 00:02:02.116 dma/dpaa2: not in enabled drivers build config 00:02:02.116 dma/hisilicon: not in enabled drivers build config 00:02:02.116 dma/idxd: not in enabled drivers build config 00:02:02.116 dma/ioat: not in enabled drivers build config 00:02:02.116 dma/skeleton: not in enabled drivers build config 00:02:02.116 net/af_packet: not in enabled drivers build config 00:02:02.116 net/af_xdp: not in enabled drivers build config 00:02:02.116 net/ark: not in enabled drivers build config 00:02:02.116 net/atlantic: not in enabled drivers build config 00:02:02.116 net/avp: not in enabled drivers build config 00:02:02.116 net/axgbe: not in enabled drivers build config 00:02:02.116 net/bnx2x: not in enabled drivers build config 00:02:02.116 net/bnxt: not in enabled drivers build config 00:02:02.116 net/bonding: not in enabled drivers build config 00:02:02.116 net/cnxk: not in enabled drivers build config 00:02:02.116 net/cpfl: not in enabled drivers build config 00:02:02.116 net/cxgbe: not in enabled drivers build config 00:02:02.116 net/dpaa: not in enabled drivers build config 00:02:02.116 net/dpaa2: not in enabled drivers build config 00:02:02.116 net/e1000: not in enabled drivers build config 00:02:02.116 net/ena: not in enabled drivers build config 00:02:02.116 net/enetc: not in enabled drivers build config 00:02:02.116 net/enetfec: not in enabled drivers build config 00:02:02.116 net/enic: not in enabled drivers build config 00:02:02.116 net/failsafe: not in enabled drivers build config 00:02:02.116 net/fm10k: not in enabled drivers build config 00:02:02.116 net/gve: not in enabled drivers build config 00:02:02.116 net/hinic: not in enabled drivers build config 00:02:02.116 net/hns3: not in enabled drivers build config 00:02:02.116 net/i40e: not in enabled drivers build config 00:02:02.116 net/iavf: not in enabled drivers build config 00:02:02.116 net/ice: not in enabled drivers build config 00:02:02.116 net/idpf: not in enabled drivers build config 00:02:02.116 net/igc: not in enabled drivers build config 00:02:02.116 net/ionic: not in enabled drivers build config 00:02:02.116 net/ipn3ke: not in enabled drivers build config 00:02:02.116 net/ixgbe: not in enabled drivers build config 00:02:02.116 net/mana: not in enabled drivers build config 00:02:02.116 net/memif: not in enabled drivers build config 00:02:02.116 net/mlx4: not in enabled drivers build config 00:02:02.116 net/mlx5: not in enabled drivers build config 00:02:02.116 net/mvneta: not in enabled drivers build config 00:02:02.116 net/mvpp2: not in enabled drivers build config 00:02:02.116 net/netvsc: not in enabled drivers build config 00:02:02.116 net/nfb: not in enabled drivers build config 00:02:02.116 net/nfp: not in enabled drivers build config 00:02:02.116 net/ngbe: not in enabled drivers build config 00:02:02.116 net/null: not in enabled drivers build config 00:02:02.116 net/octeontx: not in enabled drivers build config 00:02:02.116 net/octeon_ep: not in enabled drivers build config 00:02:02.116 net/pcap: not in enabled drivers build config 00:02:02.116 net/pfe: not in enabled drivers build config 00:02:02.116 net/qede: not in enabled drivers build config 00:02:02.116 net/ring: not in enabled drivers build config 00:02:02.116 net/sfc: not in enabled drivers build config 00:02:02.116 net/softnic: not in enabled drivers build config 00:02:02.116 net/tap: not in enabled drivers build config 00:02:02.116 net/thunderx: not in enabled drivers build config 00:02:02.116 net/txgbe: not in enabled drivers build 
config 00:02:02.116 net/vdev_netvsc: not in enabled drivers build config 00:02:02.116 net/vhost: not in enabled drivers build config 00:02:02.116 net/virtio: not in enabled drivers build config 00:02:02.116 net/vmxnet3: not in enabled drivers build config 00:02:02.116 raw/*: missing internal dependency, "rawdev" 00:02:02.116 crypto/armv8: not in enabled drivers build config 00:02:02.116 crypto/bcmfs: not in enabled drivers build config 00:02:02.116 crypto/caam_jr: not in enabled drivers build config 00:02:02.116 crypto/ccp: not in enabled drivers build config 00:02:02.116 crypto/cnxk: not in enabled drivers build config 00:02:02.116 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.116 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.116 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.116 crypto/mlx5: not in enabled drivers build config 00:02:02.116 crypto/mvsam: not in enabled drivers build config 00:02:02.116 crypto/nitrox: not in enabled drivers build config 00:02:02.116 crypto/null: not in enabled drivers build config 00:02:02.116 crypto/octeontx: not in enabled drivers build config 00:02:02.116 crypto/openssl: not in enabled drivers build config 00:02:02.116 crypto/scheduler: not in enabled drivers build config 00:02:02.116 crypto/uadk: not in enabled drivers build config 00:02:02.116 crypto/virtio: not in enabled drivers build config 00:02:02.116 compress/isal: not in enabled drivers build config 00:02:02.116 compress/mlx5: not in enabled drivers build config 00:02:02.116 compress/octeontx: not in enabled drivers build config 00:02:02.116 compress/zlib: not in enabled drivers build config 00:02:02.116 regex/*: missing internal dependency, "regexdev" 00:02:02.116 ml/*: missing internal dependency, "mldev" 00:02:02.116 vdpa/ifc: not in enabled drivers build config 00:02:02.116 vdpa/mlx5: not in enabled drivers build config 00:02:02.116 vdpa/nfp: not in enabled drivers build config 00:02:02.116 vdpa/sfc: not in enabled drivers build config 00:02:02.116 event/*: missing internal dependency, "eventdev" 00:02:02.116 baseband/*: missing internal dependency, "bbdev" 00:02:02.116 gpu/*: missing internal dependency, "gpudev" 00:02:02.116 00:02:02.116 00:02:02.116 Build targets in project: 85 00:02:02.116 00:02:02.116 DPDK 23.11.0 00:02:02.116 00:02:02.116 User defined options 00:02:02.116 buildtype : debug 00:02:02.116 default_library : shared 00:02:02.116 libdir : lib 00:02:02.116 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:02.116 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:02.116 c_link_args : 00:02:02.117 cpu_instruction_set: native 00:02:02.117 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:02.117 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:02.117 enable_docs : false 00:02:02.117 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.117 enable_kmods : false 00:02:02.117 tests : false 00:02:02.117 00:02:02.117 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.117 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 
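The "User defined options" summary above records the meson options this run passed when configuring the bundled DPDK 23.11: a debug build of shared libraries with only the pci/vdev bus and ring mempool drivers enabled and everything else disabled. Roughly the same configure step could be reproduced by hand as sketched below; the option values are copied from the summary, but the actual invocation is driven by SPDK's configure/make wrappers, so treat this as an approximation rather than the exact command used.

# Sketch only: option values taken from the meson summary above (DPDK 23.11 inside the SPDK tree).
cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp \
  -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
  -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
  -Dcpu_instruction_set=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
  -Ddisable_libs=acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
ninja -C build-tmp   # corresponds to the [1/265] ... compile steps that follow in this log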
00:02:02.117 [1/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:02.117 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:02.117 [3/265] Linking static target lib/librte_log.a 00:02:02.117 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:02.117 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:02.117 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:02.117 [7/265] Linking static target lib/librte_kvargs.a 00:02:02.117 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:02.117 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:02.117 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:02.117 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.375 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.375 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.375 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:02.375 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.632 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.633 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.633 [18/265] Linking static target lib/librte_telemetry.a 00:02:02.633 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.633 [20/265] Linking target lib/librte_log.so.24.0 00:02:02.633 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.633 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.891 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:02.891 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:02.891 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:03.150 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.150 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.151 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.151 [29/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:03.151 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.409 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.409 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.409 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:03.668 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.668 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.668 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.668 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:03.668 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.668 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.013 [40/265] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.013 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.013 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.013 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:04.013 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.272 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:04.272 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.272 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.272 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.531 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.531 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.531 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.789 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:04.789 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.789 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.047 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.047 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.047 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.308 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.308 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.308 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.308 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.308 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.308 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.569 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.569 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.827 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.827 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.827 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.084 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.084 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.343 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.343 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.343 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.343 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.343 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.343 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.343 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.343 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.601 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.858 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.858 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.858 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.114 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:07.114 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:07.372 [85/265] Linking static target lib/librte_eal.a 00:02:07.372 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:07.372 [87/265] Linking static target lib/librte_ring.a 00:02:07.372 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.629 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.629 [90/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.629 [91/265] Linking static target lib/librte_rcu.a 00:02:07.629 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.629 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.887 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.887 [95/265] Linking static target lib/librte_mempool.a 00:02:07.887 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.145 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:08.145 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:08.145 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.145 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:08.404 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.404 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.404 [103/265] Linking static target lib/librte_mbuf.a 00:02:08.404 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.662 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.662 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:09.229 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:09.229 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:09.229 [109/265] Linking static target lib/librte_meter.a 00:02:09.229 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:09.229 [111/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.229 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.229 [113/265] Linking static target lib/librte_net.a 00:02:09.229 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:09.488 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:09.488 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.746 [117/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.746 [118/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.005 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.264 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:10.834 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:10.834 
[122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:10.834 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:10.834 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.834 [125/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.834 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.834 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:10.834 [128/265] Linking static target lib/librte_pci.a 00:02:11.093 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:11.093 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:11.351 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.351 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.351 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:11.351 [134/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.351 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:11.351 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:11.351 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:11.610 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:11.610 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:11.610 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:11.610 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.610 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.869 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:11.869 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.869 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.869 [146/265] Linking static target lib/librte_ethdev.a 00:02:11.869 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.869 [148/265] Linking static target lib/librte_cmdline.a 00:02:12.127 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.386 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.386 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.644 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.644 [153/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.644 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.644 [155/265] Linking static target lib/librte_timer.a 00:02:12.644 [156/265] Linking static target lib/librte_hash.a 00:02:12.644 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.645 [158/265] Linking static target lib/librte_compressdev.a 00:02:12.645 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.904 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.162 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:13.162 [162/265] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:13.162 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.162 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.162 [165/265] Linking static target lib/librte_dmadev.a 00:02:13.421 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:13.680 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.680 [168/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.680 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:13.680 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.680 [171/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.681 [172/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.681 [173/265] Linking static target lib/librte_cryptodev.a 00:02:13.681 [174/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.940 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.940 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.199 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.199 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.457 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.457 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.457 [181/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.457 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.457 [183/265] Linking static target lib/librte_power.a 00:02:14.457 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.457 [185/265] Linking static target lib/librte_reorder.a 00:02:14.716 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.716 [187/265] Linking static target lib/librte_security.a 00:02:14.974 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.974 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.974 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.232 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.491 [192/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.491 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.749 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.749 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.007 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.007 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.267 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.267 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:16.267 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.526 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.526 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.785 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:16.785 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.785 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.785 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.785 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.785 [208/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.785 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.044 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.044 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.044 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.044 [213/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.044 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.044 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.044 [216/265] Linking static target drivers/librte_bus_vdev.a 00:02:17.044 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:17.302 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:17.302 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:17.302 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.302 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.302 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.302 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.302 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:17.561 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.495 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.495 [227/265] Linking static target lib/librte_vhost.a 00:02:19.063 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.063 [229/265] Linking target lib/librte_eal.so.24.0 00:02:19.063 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:19.321 [231/265] Linking target lib/librte_pci.so.24.0 00:02:19.321 [232/265] Linking target lib/librte_ring.so.24.0 00:02:19.321 [233/265] Linking target lib/librte_meter.so.24.0 00:02:19.321 [234/265] Linking target lib/librte_timer.so.24.0 00:02:19.321 [235/265] Linking target lib/librte_dmadev.so.24.0 00:02:19.321 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:19.321 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:19.321 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:19.321 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:19.321 [240/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:19.321 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:19.321 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:19.321 [243/265] Linking target lib/librte_mempool.so.24.0 00:02:19.321 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:19.578 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:19.578 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:19.578 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:19.578 [248/265] Linking target lib/librte_mbuf.so.24.0 00:02:19.578 [249/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.835 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:19.835 [251/265] Linking target lib/librte_net.so.24.0 00:02:19.835 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:19.835 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:19.835 [254/265] Linking target lib/librte_compressdev.so.24.0 00:02:20.093 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:20.093 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:20.093 [257/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.093 [258/265] Linking target lib/librte_hash.so.24.0 00:02:20.093 [259/265] Linking target lib/librte_security.so.24.0 00:02:20.093 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:20.093 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:20.093 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:20.093 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:20.351 [264/265] Linking target lib/librte_power.so.24.0 00:02:20.351 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:20.351 INFO: autodetecting backend as ninja 00:02:20.351 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:21.314 CC lib/log/log.o 00:02:21.314 CC lib/log/log_flags.o 00:02:21.314 CC lib/log/log_deprecated.o 00:02:21.314 CC lib/ut_mock/mock.o 00:02:21.314 CC lib/ut/ut.o 00:02:21.572 LIB libspdk_ut_mock.a 00:02:21.573 SO libspdk_ut_mock.so.5.0 00:02:21.573 LIB libspdk_ut.a 00:02:21.573 LIB libspdk_log.a 00:02:21.573 SO libspdk_ut.so.1.0 00:02:21.573 SYMLINK libspdk_ut_mock.so 00:02:21.573 SO libspdk_log.so.6.1 00:02:21.573 SYMLINK libspdk_ut.so 00:02:21.831 SYMLINK libspdk_log.so 00:02:21.831 CC lib/dma/dma.o 00:02:21.831 CC lib/util/base64.o 00:02:21.831 CC lib/ioat/ioat.o 00:02:21.831 CC lib/util/bit_array.o 00:02:21.831 CC lib/util/crc16.o 00:02:21.831 CC lib/util/cpuset.o 00:02:21.831 CC lib/util/crc32c.o 00:02:21.831 CC lib/util/crc32.o 00:02:21.831 CXX lib/trace_parser/trace.o 00:02:21.831 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.088 CC lib/util/crc32_ieee.o 00:02:22.088 CC lib/util/crc64.o 00:02:22.088 CC lib/util/dif.o 00:02:22.088 CC lib/util/fd.o 00:02:22.088 LIB libspdk_dma.a 00:02:22.088 SO libspdk_dma.so.3.0 00:02:22.088 CC lib/util/file.o 00:02:22.088 CC lib/vfio_user/host/vfio_user.o 00:02:22.088 CC lib/util/hexlify.o 00:02:22.088 LIB libspdk_ioat.a 00:02:22.088 SO libspdk_ioat.so.6.0 00:02:22.088 SYMLINK libspdk_dma.so 00:02:22.088 CC lib/util/iov.o 00:02:22.088 
CC lib/util/math.o 00:02:22.088 CC lib/util/pipe.o 00:02:22.346 SYMLINK libspdk_ioat.so 00:02:22.346 CC lib/util/strerror_tls.o 00:02:22.346 CC lib/util/string.o 00:02:22.346 CC lib/util/uuid.o 00:02:22.346 CC lib/util/fd_group.o 00:02:22.346 LIB libspdk_vfio_user.a 00:02:22.346 CC lib/util/xor.o 00:02:22.346 CC lib/util/zipf.o 00:02:22.346 SO libspdk_vfio_user.so.4.0 00:02:22.346 SYMLINK libspdk_vfio_user.so 00:02:22.604 LIB libspdk_util.a 00:02:22.862 SO libspdk_util.so.8.0 00:02:22.862 SYMLINK libspdk_util.so 00:02:22.862 LIB libspdk_trace_parser.a 00:02:22.862 SO libspdk_trace_parser.so.4.0 00:02:23.121 CC lib/rdma/common.o 00:02:23.121 CC lib/vmd/vmd.o 00:02:23.121 CC lib/rdma/rdma_verbs.o 00:02:23.121 CC lib/vmd/led.o 00:02:23.121 CC lib/conf/conf.o 00:02:23.121 CC lib/json/json_parse.o 00:02:23.121 CC lib/idxd/idxd.o 00:02:23.121 CC lib/idxd/idxd_user.o 00:02:23.121 CC lib/env_dpdk/env.o 00:02:23.121 SYMLINK libspdk_trace_parser.so 00:02:23.121 CC lib/json/json_util.o 00:02:23.121 CC lib/json/json_write.o 00:02:23.379 CC lib/env_dpdk/memory.o 00:02:23.379 LIB libspdk_conf.a 00:02:23.379 CC lib/env_dpdk/pci.o 00:02:23.379 SO libspdk_conf.so.5.0 00:02:23.379 CC lib/env_dpdk/init.o 00:02:23.379 SYMLINK libspdk_conf.so 00:02:23.379 LIB libspdk_rdma.a 00:02:23.379 CC lib/env_dpdk/threads.o 00:02:23.379 CC lib/env_dpdk/pci_ioat.o 00:02:23.379 SO libspdk_rdma.so.5.0 00:02:23.379 LIB libspdk_json.a 00:02:23.379 SYMLINK libspdk_rdma.so 00:02:23.637 CC lib/env_dpdk/pci_virtio.o 00:02:23.637 CC lib/env_dpdk/pci_vmd.o 00:02:23.637 SO libspdk_json.so.5.1 00:02:23.637 CC lib/env_dpdk/pci_idxd.o 00:02:23.637 SYMLINK libspdk_json.so 00:02:23.637 CC lib/env_dpdk/pci_event.o 00:02:23.637 LIB libspdk_idxd.a 00:02:23.637 CC lib/env_dpdk/sigbus_handler.o 00:02:23.637 CC lib/env_dpdk/pci_dpdk.o 00:02:23.637 SO libspdk_idxd.so.11.0 00:02:23.637 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.637 LIB libspdk_vmd.a 00:02:23.637 SYMLINK libspdk_idxd.so 00:02:23.637 SO libspdk_vmd.so.5.0 00:02:23.637 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.896 SYMLINK libspdk_vmd.so 00:02:23.896 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.896 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.896 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.896 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:24.155 LIB libspdk_jsonrpc.a 00:02:24.155 SO libspdk_jsonrpc.so.5.1 00:02:24.155 SYMLINK libspdk_jsonrpc.so 00:02:24.413 CC lib/rpc/rpc.o 00:02:24.413 LIB libspdk_env_dpdk.a 00:02:24.672 LIB libspdk_rpc.a 00:02:24.672 SO libspdk_env_dpdk.so.13.0 00:02:24.672 SO libspdk_rpc.so.5.0 00:02:24.672 SYMLINK libspdk_rpc.so 00:02:24.672 SYMLINK libspdk_env_dpdk.so 00:02:24.930 CC lib/trace/trace.o 00:02:24.930 CC lib/trace/trace_flags.o 00:02:24.930 CC lib/trace/trace_rpc.o 00:02:24.930 CC lib/notify/notify.o 00:02:24.930 CC lib/sock/sock.o 00:02:24.930 CC lib/notify/notify_rpc.o 00:02:24.930 CC lib/sock/sock_rpc.o 00:02:24.930 LIB libspdk_notify.a 00:02:25.188 SO libspdk_notify.so.5.0 00:02:25.188 LIB libspdk_trace.a 00:02:25.188 SYMLINK libspdk_notify.so 00:02:25.188 SO libspdk_trace.so.9.0 00:02:25.188 LIB libspdk_sock.a 00:02:25.188 SYMLINK libspdk_trace.so 00:02:25.188 SO libspdk_sock.so.8.0 00:02:25.446 SYMLINK libspdk_sock.so 00:02:25.446 CC lib/thread/thread.o 00:02:25.446 CC lib/thread/iobuf.o 00:02:25.446 CC lib/nvme/nvme_ctrlr.o 00:02:25.446 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.446 CC lib/nvme/nvme_fabric.o 00:02:25.446 CC lib/nvme/nvme_ns_cmd.o 00:02:25.446 CC lib/nvme/nvme_pcie_common.o 00:02:25.446 CC lib/nvme/nvme_qpair.o 00:02:25.446 CC lib/nvme/nvme_ns.o 
00:02:25.446 CC lib/nvme/nvme_pcie.o 00:02:25.715 CC lib/nvme/nvme.o 00:02:26.282 CC lib/nvme/nvme_quirks.o 00:02:26.282 CC lib/nvme/nvme_transport.o 00:02:26.538 CC lib/nvme/nvme_discovery.o 00:02:26.538 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.538 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.538 CC lib/nvme/nvme_tcp.o 00:02:26.538 CC lib/nvme/nvme_opal.o 00:02:26.538 CC lib/nvme/nvme_io_msg.o 00:02:27.103 LIB libspdk_thread.a 00:02:27.103 CC lib/nvme/nvme_poll_group.o 00:02:27.103 SO libspdk_thread.so.9.0 00:02:27.103 SYMLINK libspdk_thread.so 00:02:27.103 CC lib/nvme/nvme_zns.o 00:02:27.103 CC lib/nvme/nvme_cuse.o 00:02:27.103 CC lib/nvme/nvme_vfio_user.o 00:02:27.103 CC lib/nvme/nvme_rdma.o 00:02:27.361 CC lib/accel/accel.o 00:02:27.361 CC lib/blob/blobstore.o 00:02:27.361 CC lib/blob/request.o 00:02:27.619 CC lib/blob/zeroes.o 00:02:27.619 CC lib/blob/blob_bs_dev.o 00:02:27.888 CC lib/init/json_config.o 00:02:27.888 CC lib/init/subsystem.o 00:02:27.888 CC lib/init/subsystem_rpc.o 00:02:27.888 CC lib/virtio/virtio.o 00:02:27.888 CC lib/virtio/virtio_vhost_user.o 00:02:27.888 CC lib/accel/accel_rpc.o 00:02:27.888 CC lib/init/rpc.o 00:02:27.888 CC lib/virtio/virtio_vfio_user.o 00:02:28.160 CC lib/virtio/virtio_pci.o 00:02:28.160 CC lib/vfu_tgt/tgt_endpoint.o 00:02:28.160 LIB libspdk_init.a 00:02:28.160 CC lib/vfu_tgt/tgt_rpc.o 00:02:28.160 SO libspdk_init.so.4.0 00:02:28.160 CC lib/accel/accel_sw.o 00:02:28.160 SYMLINK libspdk_init.so 00:02:28.418 LIB libspdk_virtio.a 00:02:28.418 CC lib/event/reactor.o 00:02:28.418 CC lib/event/app.o 00:02:28.418 CC lib/event/log_rpc.o 00:02:28.418 CC lib/event/app_rpc.o 00:02:28.418 CC lib/event/scheduler_static.o 00:02:28.418 LIB libspdk_vfu_tgt.a 00:02:28.418 SO libspdk_virtio.so.6.0 00:02:28.418 SO libspdk_vfu_tgt.so.2.0 00:02:28.418 LIB libspdk_accel.a 00:02:28.418 LIB libspdk_nvme.a 00:02:28.418 SYMLINK libspdk_virtio.so 00:02:28.418 SYMLINK libspdk_vfu_tgt.so 00:02:28.418 SO libspdk_accel.so.14.0 00:02:28.677 SYMLINK libspdk_accel.so 00:02:28.677 SO libspdk_nvme.so.12.0 00:02:28.677 CC lib/bdev/bdev.o 00:02:28.677 CC lib/bdev/bdev_rpc.o 00:02:28.677 CC lib/bdev/bdev_zone.o 00:02:28.677 CC lib/bdev/part.o 00:02:28.677 CC lib/bdev/scsi_nvme.o 00:02:28.677 LIB libspdk_event.a 00:02:28.936 SO libspdk_event.so.12.0 00:02:28.936 SYMLINK libspdk_event.so 00:02:28.936 SYMLINK libspdk_nvme.so 00:02:30.314 LIB libspdk_blob.a 00:02:30.314 SO libspdk_blob.so.10.1 00:02:30.314 SYMLINK libspdk_blob.so 00:02:30.314 CC lib/lvol/lvol.o 00:02:30.314 CC lib/blobfs/blobfs.o 00:02:30.314 CC lib/blobfs/tree.o 00:02:31.252 LIB libspdk_blobfs.a 00:02:31.252 SO libspdk_blobfs.so.9.0 00:02:31.252 SYMLINK libspdk_blobfs.so 00:02:31.252 LIB libspdk_lvol.a 00:02:31.516 LIB libspdk_bdev.a 00:02:31.516 SO libspdk_lvol.so.9.1 00:02:31.516 SO libspdk_bdev.so.14.0 00:02:31.516 SYMLINK libspdk_lvol.so 00:02:31.516 SYMLINK libspdk_bdev.so 00:02:31.775 CC lib/scsi/dev.o 00:02:31.775 CC lib/scsi/lun.o 00:02:31.775 CC lib/nbd/nbd.o 00:02:31.775 CC lib/scsi/port.o 00:02:31.775 CC lib/nbd/nbd_rpc.o 00:02:31.775 CC lib/scsi/scsi.o 00:02:31.775 CC lib/scsi/scsi_bdev.o 00:02:31.775 CC lib/ftl/ftl_core.o 00:02:31.775 CC lib/ublk/ublk.o 00:02:31.775 CC lib/nvmf/ctrlr.o 00:02:31.775 CC lib/ublk/ublk_rpc.o 00:02:32.034 CC lib/scsi/scsi_pr.o 00:02:32.034 CC lib/scsi/scsi_rpc.o 00:02:32.034 CC lib/ftl/ftl_init.o 00:02:32.034 CC lib/nvmf/ctrlr_discovery.o 00:02:32.034 CC lib/ftl/ftl_layout.o 00:02:32.034 CC lib/nvmf/ctrlr_bdev.o 00:02:32.034 CC lib/nvmf/subsystem.o 00:02:32.293 LIB libspdk_nbd.a 
00:02:32.293 CC lib/ftl/ftl_debug.o 00:02:32.293 CC lib/ftl/ftl_io.o 00:02:32.293 CC lib/scsi/task.o 00:02:32.293 SO libspdk_nbd.so.6.0 00:02:32.293 LIB libspdk_ublk.a 00:02:32.293 SYMLINK libspdk_nbd.so 00:02:32.293 CC lib/ftl/ftl_sb.o 00:02:32.293 SO libspdk_ublk.so.2.0 00:02:32.551 CC lib/ftl/ftl_l2p.o 00:02:32.551 LIB libspdk_scsi.a 00:02:32.551 SYMLINK libspdk_ublk.so 00:02:32.551 CC lib/nvmf/nvmf.o 00:02:32.551 CC lib/nvmf/nvmf_rpc.o 00:02:32.551 CC lib/nvmf/transport.o 00:02:32.551 CC lib/nvmf/tcp.o 00:02:32.551 SO libspdk_scsi.so.8.0 00:02:32.551 CC lib/ftl/ftl_l2p_flat.o 00:02:32.551 SYMLINK libspdk_scsi.so 00:02:32.551 CC lib/ftl/ftl_nv_cache.o 00:02:32.809 CC lib/nvmf/vfio_user.o 00:02:32.809 CC lib/iscsi/conn.o 00:02:32.809 CC lib/iscsi/init_grp.o 00:02:33.068 CC lib/nvmf/rdma.o 00:02:33.068 CC lib/iscsi/iscsi.o 00:02:33.326 CC lib/vhost/vhost.o 00:02:33.326 CC lib/vhost/vhost_rpc.o 00:02:33.326 CC lib/iscsi/md5.o 00:02:33.326 CC lib/vhost/vhost_scsi.o 00:02:33.326 CC lib/ftl/ftl_band.o 00:02:33.584 CC lib/ftl/ftl_band_ops.o 00:02:33.584 CC lib/ftl/ftl_writer.o 00:02:33.843 CC lib/iscsi/param.o 00:02:33.843 CC lib/vhost/vhost_blk.o 00:02:33.843 CC lib/ftl/ftl_rq.o 00:02:33.843 CC lib/vhost/rte_vhost_user.o 00:02:33.843 CC lib/ftl/ftl_reloc.o 00:02:34.101 CC lib/ftl/ftl_l2p_cache.o 00:02:34.101 CC lib/ftl/ftl_p2l.o 00:02:34.101 CC lib/iscsi/portal_grp.o 00:02:34.359 CC lib/ftl/mngt/ftl_mngt.o 00:02:34.359 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:34.359 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:34.359 CC lib/iscsi/tgt_node.o 00:02:34.359 CC lib/iscsi/iscsi_subsystem.o 00:02:34.359 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:34.618 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:34.618 CC lib/iscsi/iscsi_rpc.o 00:02:34.618 CC lib/iscsi/task.o 00:02:34.618 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:34.618 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:34.876 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:34.876 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:34.876 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:34.876 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:34.876 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:34.876 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:34.876 LIB libspdk_iscsi.a 00:02:34.876 CC lib/ftl/utils/ftl_conf.o 00:02:34.876 CC lib/ftl/utils/ftl_md.o 00:02:34.876 LIB libspdk_vhost.a 00:02:34.876 SO libspdk_iscsi.so.7.0 00:02:35.135 CC lib/ftl/utils/ftl_mempool.o 00:02:35.135 CC lib/ftl/utils/ftl_bitmap.o 00:02:35.135 SO libspdk_vhost.so.7.1 00:02:35.135 CC lib/ftl/utils/ftl_property.o 00:02:35.135 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:35.135 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:35.135 SYMLINK libspdk_iscsi.so 00:02:35.135 SYMLINK libspdk_vhost.so 00:02:35.135 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:35.135 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:35.135 LIB libspdk_nvmf.a 00:02:35.135 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:35.135 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:35.135 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:35.451 SO libspdk_nvmf.so.17.0 00:02:35.451 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:35.451 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:35.451 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:35.451 CC lib/ftl/base/ftl_base_dev.o 00:02:35.451 CC lib/ftl/base/ftl_base_bdev.o 00:02:35.451 CC lib/ftl/ftl_trace.o 00:02:35.451 SYMLINK libspdk_nvmf.so 00:02:35.739 LIB libspdk_ftl.a 00:02:35.998 SO libspdk_ftl.so.8.0 00:02:36.257 SYMLINK libspdk_ftl.so 00:02:36.515 CC module/vfu_device/vfu_virtio.o 00:02:36.515 CC module/env_dpdk/env_dpdk_rpc.o 00:02:36.515 CC module/accel/dsa/accel_dsa.o 00:02:36.515 CC 
module/blob/bdev/blob_bdev.o 00:02:36.515 CC module/accel/error/accel_error.o 00:02:36.515 CC module/accel/iaa/accel_iaa.o 00:02:36.515 CC module/accel/ioat/accel_ioat.o 00:02:36.515 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.515 CC module/sock/posix/posix.o 00:02:36.515 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.774 LIB libspdk_env_dpdk_rpc.a 00:02:36.774 SO libspdk_env_dpdk_rpc.so.5.0 00:02:36.774 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.774 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:36.774 CC module/accel/error/accel_error_rpc.o 00:02:36.774 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.774 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.774 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.774 LIB libspdk_scheduler_dynamic.a 00:02:36.774 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.774 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.774 CC module/vfu_device/vfu_virtio_blk.o 00:02:36.774 SO libspdk_scheduler_dynamic.so.3.0 00:02:37.033 LIB libspdk_blob_bdev.a 00:02:37.033 SYMLINK libspdk_scheduler_dynamic.so 00:02:37.033 SO libspdk_blob_bdev.so.10.1 00:02:37.033 LIB libspdk_accel_error.a 00:02:37.033 CC module/vfu_device/vfu_virtio_scsi.o 00:02:37.033 CC module/scheduler/gscheduler/gscheduler.o 00:02:37.033 LIB libspdk_accel_ioat.a 00:02:37.033 LIB libspdk_accel_iaa.a 00:02:37.033 SO libspdk_accel_error.so.1.0 00:02:37.033 LIB libspdk_accel_dsa.a 00:02:37.033 SO libspdk_accel_iaa.so.2.0 00:02:37.033 SO libspdk_accel_ioat.so.5.0 00:02:37.033 SYMLINK libspdk_blob_bdev.so 00:02:37.033 SO libspdk_accel_dsa.so.4.0 00:02:37.033 CC module/vfu_device/vfu_virtio_rpc.o 00:02:37.033 SYMLINK libspdk_accel_error.so 00:02:37.033 SYMLINK libspdk_accel_ioat.so 00:02:37.033 SYMLINK libspdk_accel_iaa.so 00:02:37.033 SYMLINK libspdk_accel_dsa.so 00:02:37.033 LIB libspdk_scheduler_gscheduler.a 00:02:37.033 SO libspdk_scheduler_gscheduler.so.3.0 00:02:37.292 SYMLINK libspdk_scheduler_gscheduler.so 00:02:37.292 CC module/blobfs/bdev/blobfs_bdev.o 00:02:37.292 CC module/bdev/delay/vbdev_delay.o 00:02:37.292 CC module/bdev/error/vbdev_error.o 00:02:37.292 CC module/bdev/gpt/gpt.o 00:02:37.292 CC module/bdev/lvol/vbdev_lvol.o 00:02:37.292 CC module/bdev/malloc/bdev_malloc.o 00:02:37.292 LIB libspdk_vfu_device.a 00:02:37.292 CC module/bdev/null/bdev_null.o 00:02:37.292 SO libspdk_vfu_device.so.2.0 00:02:37.292 CC module/bdev/nvme/bdev_nvme.o 00:02:37.292 LIB libspdk_sock_posix.a 00:02:37.551 SO libspdk_sock_posix.so.5.0 00:02:37.551 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:37.551 CC module/bdev/gpt/vbdev_gpt.o 00:02:37.551 SYMLINK libspdk_vfu_device.so 00:02:37.551 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.551 SYMLINK libspdk_sock_posix.so 00:02:37.551 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.551 CC module/bdev/error/vbdev_error_rpc.o 00:02:37.551 CC module/bdev/null/bdev_null_rpc.o 00:02:37.551 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:37.551 LIB libspdk_blobfs_bdev.a 00:02:37.551 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:37.551 SO libspdk_blobfs_bdev.so.5.0 00:02:37.810 LIB libspdk_bdev_gpt.a 00:02:37.810 SYMLINK libspdk_blobfs_bdev.so 00:02:37.810 LIB libspdk_bdev_error.a 00:02:37.810 SO libspdk_bdev_gpt.so.5.0 00:02:37.810 LIB libspdk_bdev_lvol.a 00:02:37.810 SO libspdk_bdev_error.so.5.0 00:02:37.810 LIB libspdk_bdev_delay.a 00:02:37.810 LIB libspdk_bdev_malloc.a 00:02:37.810 SO libspdk_bdev_lvol.so.5.0 00:02:37.810 SYMLINK libspdk_bdev_gpt.so 00:02:37.810 SO libspdk_bdev_delay.so.5.0 00:02:37.810 SO libspdk_bdev_malloc.so.5.0 00:02:37.810 LIB 
libspdk_bdev_null.a 00:02:37.810 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.810 SYMLINK libspdk_bdev_error.so 00:02:37.810 CC module/bdev/nvme/nvme_rpc.o 00:02:37.810 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.810 SYMLINK libspdk_bdev_lvol.so 00:02:37.810 CC module/bdev/raid/bdev_raid.o 00:02:37.810 SO libspdk_bdev_null.so.5.0 00:02:37.810 SYMLINK libspdk_bdev_delay.so 00:02:37.810 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.810 SYMLINK libspdk_bdev_malloc.so 00:02:38.068 SYMLINK libspdk_bdev_null.so 00:02:38.068 CC module/bdev/split/vbdev_split.o 00:02:38.068 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:38.068 CC module/bdev/aio/bdev_aio.o 00:02:38.068 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.068 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:38.068 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:38.326 CC module/bdev/split/vbdev_split_rpc.o 00:02:38.326 CC module/bdev/nvme/vbdev_opal.o 00:02:38.326 CC module/bdev/ftl/bdev_ftl.o 00:02:38.326 LIB libspdk_bdev_passthru.a 00:02:38.326 SO libspdk_bdev_passthru.so.5.0 00:02:38.326 LIB libspdk_bdev_zone_block.a 00:02:38.326 LIB libspdk_bdev_split.a 00:02:38.326 CC module/bdev/iscsi/bdev_iscsi.o 00:02:38.326 LIB libspdk_bdev_aio.a 00:02:38.326 SO libspdk_bdev_zone_block.so.5.0 00:02:38.326 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:38.326 SYMLINK libspdk_bdev_passthru.so 00:02:38.326 SO libspdk_bdev_split.so.5.0 00:02:38.326 SO libspdk_bdev_aio.so.5.0 00:02:38.584 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.584 SYMLINK libspdk_bdev_zone_block.so 00:02:38.584 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.584 SYMLINK libspdk_bdev_aio.so 00:02:38.584 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:38.584 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:38.584 SYMLINK libspdk_bdev_split.so 00:02:38.584 CC module/bdev/raid/bdev_raid_sb.o 00:02:38.584 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:38.584 CC module/bdev/raid/raid0.o 00:02:38.842 LIB libspdk_bdev_ftl.a 00:02:38.842 CC module/bdev/raid/raid1.o 00:02:38.842 CC module/bdev/raid/concat.o 00:02:38.842 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:38.842 SO libspdk_bdev_ftl.so.5.0 00:02:38.842 SYMLINK libspdk_bdev_ftl.so 00:02:38.842 LIB libspdk_bdev_iscsi.a 00:02:38.842 SO libspdk_bdev_iscsi.so.5.0 00:02:38.842 LIB libspdk_bdev_virtio.a 00:02:38.842 SYMLINK libspdk_bdev_iscsi.so 00:02:38.842 SO libspdk_bdev_virtio.so.5.0 00:02:39.100 LIB libspdk_bdev_raid.a 00:02:39.100 SYMLINK libspdk_bdev_virtio.so 00:02:39.100 SO libspdk_bdev_raid.so.5.0 00:02:39.100 SYMLINK libspdk_bdev_raid.so 00:02:39.667 LIB libspdk_bdev_nvme.a 00:02:39.667 SO libspdk_bdev_nvme.so.6.0 00:02:39.950 SYMLINK libspdk_bdev_nvme.so 00:02:40.216 CC module/event/subsystems/sock/sock.o 00:02:40.216 CC module/event/subsystems/vmd/vmd.o 00:02:40.216 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:40.216 CC module/event/subsystems/scheduler/scheduler.o 00:02:40.216 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:40.216 CC module/event/subsystems/iobuf/iobuf.o 00:02:40.216 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:40.216 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:40.216 LIB libspdk_event_vhost_blk.a 00:02:40.216 LIB libspdk_event_sock.a 00:02:40.216 LIB libspdk_event_scheduler.a 00:02:40.216 LIB libspdk_event_vfu_tgt.a 00:02:40.216 SO libspdk_event_vhost_blk.so.2.0 00:02:40.216 SO libspdk_event_sock.so.4.0 00:02:40.216 LIB libspdk_event_iobuf.a 00:02:40.216 LIB libspdk_event_vmd.a 00:02:40.216 SO libspdk_event_scheduler.so.3.0 00:02:40.216 SO libspdk_event_vfu_tgt.so.2.0 
00:02:40.475 SO libspdk_event_iobuf.so.2.0 00:02:40.475 SYMLINK libspdk_event_sock.so 00:02:40.475 SYMLINK libspdk_event_vhost_blk.so 00:02:40.475 SO libspdk_event_vmd.so.5.0 00:02:40.475 SYMLINK libspdk_event_scheduler.so 00:02:40.475 SYMLINK libspdk_event_vfu_tgt.so 00:02:40.475 SYMLINK libspdk_event_iobuf.so 00:02:40.475 SYMLINK libspdk_event_vmd.so 00:02:40.475 CC module/event/subsystems/accel/accel.o 00:02:40.734 LIB libspdk_event_accel.a 00:02:40.734 SO libspdk_event_accel.so.5.0 00:02:40.992 SYMLINK libspdk_event_accel.so 00:02:40.992 CC module/event/subsystems/bdev/bdev.o 00:02:41.251 LIB libspdk_event_bdev.a 00:02:41.251 SO libspdk_event_bdev.so.5.0 00:02:41.251 SYMLINK libspdk_event_bdev.so 00:02:41.510 CC module/event/subsystems/scsi/scsi.o 00:02:41.510 CC module/event/subsystems/nbd/nbd.o 00:02:41.510 CC module/event/subsystems/ublk/ublk.o 00:02:41.510 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.510 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.769 LIB libspdk_event_ublk.a 00:02:41.769 LIB libspdk_event_nbd.a 00:02:41.769 SO libspdk_event_ublk.so.2.0 00:02:41.769 LIB libspdk_event_scsi.a 00:02:41.769 SO libspdk_event_nbd.so.5.0 00:02:41.769 SO libspdk_event_scsi.so.5.0 00:02:41.769 SYMLINK libspdk_event_ublk.so 00:02:41.769 LIB libspdk_event_nvmf.a 00:02:41.769 SYMLINK libspdk_event_nbd.so 00:02:41.769 SYMLINK libspdk_event_scsi.so 00:02:41.769 SO libspdk_event_nvmf.so.5.0 00:02:42.028 SYMLINK libspdk_event_nvmf.so 00:02:42.028 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.028 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.287 LIB libspdk_event_vhost_scsi.a 00:02:42.287 LIB libspdk_event_iscsi.a 00:02:42.287 SO libspdk_event_vhost_scsi.so.2.0 00:02:42.287 SO libspdk_event_iscsi.so.5.0 00:02:42.287 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.287 SYMLINK libspdk_event_iscsi.so 00:02:42.287 SO libspdk.so.5.0 00:02:42.546 SYMLINK libspdk.so 00:02:42.546 TEST_HEADER include/spdk/accel.h 00:02:42.546 CC app/trace_record/trace_record.o 00:02:42.546 CXX app/trace/trace.o 00:02:42.546 TEST_HEADER include/spdk/accel_module.h 00:02:42.546 TEST_HEADER include/spdk/assert.h 00:02:42.546 TEST_HEADER include/spdk/barrier.h 00:02:42.546 TEST_HEADER include/spdk/base64.h 00:02:42.546 TEST_HEADER include/spdk/bdev.h 00:02:42.546 TEST_HEADER include/spdk/bdev_module.h 00:02:42.546 TEST_HEADER include/spdk/bdev_zone.h 00:02:42.546 TEST_HEADER include/spdk/bit_array.h 00:02:42.546 TEST_HEADER include/spdk/bit_pool.h 00:02:42.546 TEST_HEADER include/spdk/blob_bdev.h 00:02:42.546 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:42.546 TEST_HEADER include/spdk/blobfs.h 00:02:42.546 TEST_HEADER include/spdk/blob.h 00:02:42.546 TEST_HEADER include/spdk/conf.h 00:02:42.546 TEST_HEADER include/spdk/config.h 00:02:42.546 TEST_HEADER include/spdk/cpuset.h 00:02:42.546 TEST_HEADER include/spdk/crc16.h 00:02:42.546 TEST_HEADER include/spdk/crc32.h 00:02:42.546 TEST_HEADER include/spdk/crc64.h 00:02:42.546 TEST_HEADER include/spdk/dif.h 00:02:42.546 TEST_HEADER include/spdk/dma.h 00:02:42.546 TEST_HEADER include/spdk/endian.h 00:02:42.546 TEST_HEADER include/spdk/env_dpdk.h 00:02:42.546 TEST_HEADER include/spdk/env.h 00:02:42.546 TEST_HEADER include/spdk/event.h 00:02:42.546 TEST_HEADER include/spdk/fd_group.h 00:02:42.546 CC examples/accel/perf/accel_perf.o 00:02:42.546 TEST_HEADER include/spdk/fd.h 00:02:42.546 TEST_HEADER include/spdk/file.h 00:02:42.546 TEST_HEADER include/spdk/ftl.h 00:02:42.546 TEST_HEADER include/spdk/gpt_spec.h 00:02:42.546 TEST_HEADER include/spdk/hexlify.h 
00:02:42.546 TEST_HEADER include/spdk/histogram_data.h 00:02:42.805 TEST_HEADER include/spdk/idxd.h 00:02:42.805 TEST_HEADER include/spdk/idxd_spec.h 00:02:42.805 TEST_HEADER include/spdk/init.h 00:02:42.805 TEST_HEADER include/spdk/ioat.h 00:02:42.805 TEST_HEADER include/spdk/ioat_spec.h 00:02:42.805 CC test/accel/dif/dif.o 00:02:42.805 CC test/bdev/bdevio/bdevio.o 00:02:42.805 TEST_HEADER include/spdk/iscsi_spec.h 00:02:42.805 CC test/dma/test_dma/test_dma.o 00:02:42.805 TEST_HEADER include/spdk/json.h 00:02:42.805 CC test/blobfs/mkfs/mkfs.o 00:02:42.805 TEST_HEADER include/spdk/jsonrpc.h 00:02:42.805 TEST_HEADER include/spdk/likely.h 00:02:42.805 TEST_HEADER include/spdk/log.h 00:02:42.805 TEST_HEADER include/spdk/lvol.h 00:02:42.805 TEST_HEADER include/spdk/memory.h 00:02:42.805 CC test/app/bdev_svc/bdev_svc.o 00:02:42.805 TEST_HEADER include/spdk/mmio.h 00:02:42.805 CC test/env/mem_callbacks/mem_callbacks.o 00:02:42.805 TEST_HEADER include/spdk/nbd.h 00:02:42.805 TEST_HEADER include/spdk/notify.h 00:02:42.805 TEST_HEADER include/spdk/nvme.h 00:02:42.805 TEST_HEADER include/spdk/nvme_intel.h 00:02:42.805 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:42.805 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:42.805 TEST_HEADER include/spdk/nvme_spec.h 00:02:42.805 TEST_HEADER include/spdk/nvme_zns.h 00:02:42.805 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:42.805 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:42.805 TEST_HEADER include/spdk/nvmf.h 00:02:42.805 TEST_HEADER include/spdk/nvmf_spec.h 00:02:42.805 TEST_HEADER include/spdk/nvmf_transport.h 00:02:42.805 TEST_HEADER include/spdk/opal.h 00:02:42.805 TEST_HEADER include/spdk/opal_spec.h 00:02:42.805 TEST_HEADER include/spdk/pci_ids.h 00:02:42.805 TEST_HEADER include/spdk/pipe.h 00:02:42.805 TEST_HEADER include/spdk/queue.h 00:02:42.805 TEST_HEADER include/spdk/reduce.h 00:02:42.805 TEST_HEADER include/spdk/rpc.h 00:02:42.805 TEST_HEADER include/spdk/scheduler.h 00:02:42.805 TEST_HEADER include/spdk/scsi.h 00:02:42.805 TEST_HEADER include/spdk/scsi_spec.h 00:02:42.805 TEST_HEADER include/spdk/sock.h 00:02:42.805 TEST_HEADER include/spdk/stdinc.h 00:02:42.805 TEST_HEADER include/spdk/string.h 00:02:42.805 TEST_HEADER include/spdk/thread.h 00:02:42.805 TEST_HEADER include/spdk/trace.h 00:02:42.805 TEST_HEADER include/spdk/trace_parser.h 00:02:42.805 TEST_HEADER include/spdk/tree.h 00:02:42.805 TEST_HEADER include/spdk/ublk.h 00:02:42.805 TEST_HEADER include/spdk/util.h 00:02:42.805 LINK spdk_trace_record 00:02:42.805 TEST_HEADER include/spdk/uuid.h 00:02:42.805 TEST_HEADER include/spdk/version.h 00:02:42.805 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:42.805 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:42.805 TEST_HEADER include/spdk/vhost.h 00:02:42.805 TEST_HEADER include/spdk/vmd.h 00:02:42.805 TEST_HEADER include/spdk/xor.h 00:02:42.805 TEST_HEADER include/spdk/zipf.h 00:02:42.805 CXX test/cpp_headers/accel.o 00:02:43.064 LINK mkfs 00:02:43.064 LINK bdev_svc 00:02:43.064 CXX test/cpp_headers/accel_module.o 00:02:43.064 LINK spdk_trace 00:02:43.064 LINK dif 00:02:43.064 LINK test_dma 00:02:43.064 LINK accel_perf 00:02:43.064 LINK bdevio 00:02:43.323 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.323 CXX test/cpp_headers/assert.o 00:02:43.323 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.323 CC test/env/vtophys/vtophys.o 00:02:43.323 CC app/nvmf_tgt/nvmf_main.o 00:02:43.581 LINK mem_callbacks 00:02:43.581 LINK hello_bdev 00:02:43.581 CXX test/cpp_headers/barrier.o 00:02:43.581 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.581 
CC app/iscsi_tgt/iscsi_tgt.o 00:02:43.581 CC examples/blob/hello_world/hello_blob.o 00:02:43.581 CC app/spdk_tgt/spdk_tgt.o 00:02:43.581 LINK vtophys 00:02:43.581 CXX test/cpp_headers/base64.o 00:02:43.581 LINK nvmf_tgt 00:02:43.581 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.581 LINK iscsi_tgt 00:02:43.840 CC test/env/memory/memory_ut.o 00:02:43.840 LINK spdk_tgt 00:02:43.840 LINK hello_blob 00:02:43.840 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:43.840 CXX test/cpp_headers/bdev.o 00:02:43.840 LINK env_dpdk_post_init 00:02:43.840 LINK nvme_fuzz 00:02:43.840 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.098 CXX test/cpp_headers/bdev_module.o 00:02:44.098 CC test/env/pci/pci_ut.o 00:02:44.098 CC app/spdk_lspci/spdk_lspci.o 00:02:44.098 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.098 LINK bdevperf 00:02:44.098 CC examples/blob/cli/blobcli.o 00:02:44.098 CC test/event/event_perf/event_perf.o 00:02:44.098 LINK spdk_lspci 00:02:44.098 CC test/lvol/esnap/esnap.o 00:02:44.098 CXX test/cpp_headers/bdev_zone.o 00:02:44.358 LINK event_perf 00:02:44.358 CXX test/cpp_headers/bit_array.o 00:02:44.358 LINK pci_ut 00:02:44.358 CC app/spdk_nvme_perf/perf.o 00:02:44.358 LINK vhost_fuzz 00:02:44.358 CC test/event/reactor/reactor.o 00:02:44.358 CC app/spdk_nvme_identify/identify.o 00:02:44.358 CXX test/cpp_headers/bit_pool.o 00:02:44.617 LINK blobcli 00:02:44.617 LINK reactor 00:02:44.617 CXX test/cpp_headers/blob_bdev.o 00:02:44.617 CXX test/cpp_headers/blobfs_bdev.o 00:02:44.617 LINK memory_ut 00:02:44.617 CC test/event/reactor_perf/reactor_perf.o 00:02:44.876 CC test/event/app_repeat/app_repeat.o 00:02:44.876 CXX test/cpp_headers/blobfs.o 00:02:44.876 LINK reactor_perf 00:02:44.876 CC examples/ioat/perf/perf.o 00:02:44.876 CC test/event/scheduler/scheduler.o 00:02:44.876 LINK app_repeat 00:02:44.876 CC test/nvme/aer/aer.o 00:02:45.135 CXX test/cpp_headers/blob.o 00:02:45.135 CC test/rpc_client/rpc_client_test.o 00:02:45.135 LINK ioat_perf 00:02:45.135 LINK scheduler 00:02:45.135 LINK spdk_nvme_perf 00:02:45.135 CXX test/cpp_headers/conf.o 00:02:45.135 LINK spdk_nvme_identify 00:02:45.135 CC test/thread/poller_perf/poller_perf.o 00:02:45.405 LINK rpc_client_test 00:02:45.405 LINK aer 00:02:45.405 CC examples/ioat/verify/verify.o 00:02:45.405 CXX test/cpp_headers/config.o 00:02:45.405 CXX test/cpp_headers/cpuset.o 00:02:45.405 LINK poller_perf 00:02:45.405 CC test/nvme/reset/reset.o 00:02:45.405 CC app/spdk_nvme_discover/discovery_aer.o 00:02:45.405 CC test/nvme/sgl/sgl.o 00:02:45.405 CC examples/nvme/hello_world/hello_world.o 00:02:45.405 LINK iscsi_fuzz 00:02:45.663 CC test/nvme/e2edp/nvme_dp.o 00:02:45.663 LINK verify 00:02:45.663 CXX test/cpp_headers/crc16.o 00:02:45.663 CC test/nvme/overhead/overhead.o 00:02:45.663 LINK spdk_nvme_discover 00:02:45.663 LINK reset 00:02:45.663 CXX test/cpp_headers/crc32.o 00:02:45.663 LINK hello_world 00:02:45.663 LINK sgl 00:02:45.921 CC examples/sock/hello_world/hello_sock.o 00:02:45.921 CC test/app/histogram_perf/histogram_perf.o 00:02:45.921 LINK nvme_dp 00:02:45.921 CC app/spdk_top/spdk_top.o 00:02:45.921 CC test/app/jsoncat/jsoncat.o 00:02:45.921 CXX test/cpp_headers/crc64.o 00:02:45.921 LINK overhead 00:02:45.921 CC examples/nvme/reconnect/reconnect.o 00:02:45.921 LINK histogram_perf 00:02:45.921 CXX test/cpp_headers/dif.o 00:02:45.921 CC app/vhost/vhost.o 00:02:46.178 LINK hello_sock 00:02:46.178 LINK jsoncat 00:02:46.178 CXX test/cpp_headers/dma.o 00:02:46.178 CC test/nvme/err_injection/err_injection.o 00:02:46.178 CC 
test/nvme/startup/startup.o 00:02:46.178 CC examples/vmd/lsvmd/lsvmd.o 00:02:46.178 LINK vhost 00:02:46.178 CC test/app/stub/stub.o 00:02:46.436 CXX test/cpp_headers/endian.o 00:02:46.436 LINK err_injection 00:02:46.436 CC examples/nvmf/nvmf/nvmf.o 00:02:46.436 LINK startup 00:02:46.436 LINK lsvmd 00:02:46.436 LINK reconnect 00:02:46.436 LINK stub 00:02:46.436 CXX test/cpp_headers/env_dpdk.o 00:02:46.694 CC examples/vmd/led/led.o 00:02:46.694 CC test/nvme/reserve/reserve.o 00:02:46.694 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.694 CC examples/util/zipf/zipf.o 00:02:46.694 CXX test/cpp_headers/env.o 00:02:46.694 LINK nvmf 00:02:46.694 CC examples/thread/thread/thread_ex.o 00:02:46.694 LINK spdk_top 00:02:46.694 CC app/spdk_dd/spdk_dd.o 00:02:46.694 LINK led 00:02:46.952 LINK zipf 00:02:46.952 LINK reserve 00:02:46.952 CXX test/cpp_headers/event.o 00:02:46.952 CXX test/cpp_headers/fd_group.o 00:02:46.952 CC app/fio/nvme/fio_plugin.o 00:02:46.952 LINK thread 00:02:47.209 CXX test/cpp_headers/fd.o 00:02:47.210 CC test/nvme/simple_copy/simple_copy.o 00:02:47.210 CC examples/idxd/perf/perf.o 00:02:47.210 CC test/nvme/connect_stress/connect_stress.o 00:02:47.210 LINK spdk_dd 00:02:47.210 LINK nvme_manage 00:02:47.210 CC examples/nvme/arbitration/arbitration.o 00:02:47.210 CXX test/cpp_headers/file.o 00:02:47.210 LINK connect_stress 00:02:47.468 LINK simple_copy 00:02:47.468 CC app/fio/bdev/fio_plugin.o 00:02:47.468 CXX test/cpp_headers/ftl.o 00:02:47.468 LINK idxd_perf 00:02:47.468 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:47.468 LINK arbitration 00:02:47.468 CC test/nvme/boot_partition/boot_partition.o 00:02:47.468 CC examples/nvme/hotplug/hotplug.o 00:02:47.726 CXX test/cpp_headers/gpt_spec.o 00:02:47.726 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.726 LINK interrupt_tgt 00:02:47.726 LINK spdk_nvme 00:02:47.726 LINK boot_partition 00:02:47.726 CC examples/nvme/abort/abort.o 00:02:47.726 CXX test/cpp_headers/hexlify.o 00:02:47.726 LINK hotplug 00:02:47.984 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:47.984 LINK cmb_copy 00:02:47.984 LINK spdk_bdev 00:02:47.984 CXX test/cpp_headers/histogram_data.o 00:02:47.984 CC test/nvme/compliance/nvme_compliance.o 00:02:47.984 CXX test/cpp_headers/idxd.o 00:02:47.984 CXX test/cpp_headers/idxd_spec.o 00:02:47.984 CXX test/cpp_headers/init.o 00:02:47.984 CXX test/cpp_headers/ioat.o 00:02:47.984 LINK pmr_persistence 00:02:47.984 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.242 LINK abort 00:02:48.242 CXX test/cpp_headers/ioat_spec.o 00:02:48.242 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.242 CXX test/cpp_headers/iscsi_spec.o 00:02:48.242 CXX test/cpp_headers/json.o 00:02:48.242 LINK nvme_compliance 00:02:48.242 CC test/nvme/fdp/fdp.o 00:02:48.242 CC test/nvme/cuse/cuse.o 00:02:48.242 CXX test/cpp_headers/jsonrpc.o 00:02:48.242 LINK fused_ordering 00:02:48.500 LINK doorbell_aers 00:02:48.500 CXX test/cpp_headers/likely.o 00:02:48.500 CXX test/cpp_headers/log.o 00:02:48.500 CXX test/cpp_headers/lvol.o 00:02:48.500 CXX test/cpp_headers/memory.o 00:02:48.500 CXX test/cpp_headers/mmio.o 00:02:48.758 CXX test/cpp_headers/nbd.o 00:02:48.758 CXX test/cpp_headers/notify.o 00:02:48.758 CXX test/cpp_headers/nvme.o 00:02:48.758 CXX test/cpp_headers/nvme_intel.o 00:02:48.758 LINK fdp 00:02:48.758 CXX test/cpp_headers/nvme_ocssd.o 00:02:48.758 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:48.758 CXX test/cpp_headers/nvme_spec.o 00:02:49.016 CXX test/cpp_headers/nvme_zns.o 00:02:49.016 CXX test/cpp_headers/nvmf_cmd.o 
00:02:49.016 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.016 CXX test/cpp_headers/nvmf.o 00:02:49.016 CXX test/cpp_headers/nvmf_spec.o 00:02:49.016 CXX test/cpp_headers/nvmf_transport.o 00:02:49.016 CXX test/cpp_headers/opal.o 00:02:49.016 LINK esnap 00:02:49.273 CXX test/cpp_headers/opal_spec.o 00:02:49.273 CXX test/cpp_headers/pci_ids.o 00:02:49.273 CXX test/cpp_headers/pipe.o 00:02:49.273 CXX test/cpp_headers/queue.o 00:02:49.273 CXX test/cpp_headers/reduce.o 00:02:49.273 CXX test/cpp_headers/rpc.o 00:02:49.273 CXX test/cpp_headers/scheduler.o 00:02:49.273 CXX test/cpp_headers/scsi.o 00:02:49.273 CXX test/cpp_headers/scsi_spec.o 00:02:49.273 CXX test/cpp_headers/sock.o 00:02:49.274 CXX test/cpp_headers/stdinc.o 00:02:49.545 CXX test/cpp_headers/string.o 00:02:49.545 CXX test/cpp_headers/thread.o 00:02:49.545 CXX test/cpp_headers/trace.o 00:02:49.545 CXX test/cpp_headers/trace_parser.o 00:02:49.545 LINK cuse 00:02:49.545 CXX test/cpp_headers/tree.o 00:02:49.545 CXX test/cpp_headers/ublk.o 00:02:49.545 CXX test/cpp_headers/util.o 00:02:49.545 CXX test/cpp_headers/uuid.o 00:02:49.545 CXX test/cpp_headers/version.o 00:02:49.545 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.545 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.545 CXX test/cpp_headers/vhost.o 00:02:49.814 CXX test/cpp_headers/vmd.o 00:02:49.814 CXX test/cpp_headers/xor.o 00:02:49.814 CXX test/cpp_headers/zipf.o 00:02:55.077 00:02:55.077 real 1m5.368s 00:02:55.077 user 6m48.832s 00:02:55.077 sys 1m35.310s 00:02:55.077 ************************************ 00:02:55.077 END TEST make 00:02:55.077 ************************************ 00:02:55.077 17:57:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:55.077 17:57:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.077 17:57:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:55.077 17:57:52 -- nvmf/common.sh@7 -- # uname -s 00:02:55.077 17:57:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.077 17:57:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.077 17:57:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.077 17:57:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.077 17:57:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:55.077 17:57:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:55.077 17:57:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.077 17:57:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:55.077 17:57:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.077 17:57:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:55.077 17:57:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:02:55.077 17:57:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:02:55.077 17:57:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.077 17:57:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:55.077 17:57:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:55.077 17:57:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:55.077 17:57:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.077 17:57:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.077 17:57:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.077 17:57:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.077 17:57:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.077 17:57:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.077 17:57:52 -- paths/export.sh@5 -- # export PATH 00:02:55.078 17:57:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.078 17:57:52 -- nvmf/common.sh@46 -- # : 0 00:02:55.078 17:57:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:55.078 17:57:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:55.078 17:57:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:55.078 17:57:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.078 17:57:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.078 17:57:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:55.078 17:57:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:55.078 17:57:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:55.078 17:57:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.078 17:57:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.078 17:57:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.078 17:57:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.078 17:57:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:55.078 17:57:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.078 17:57:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:55.078 17:57:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.078 17:57:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.078 17:57:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.078 17:57:52 -- spdk/autotest.sh@48 -- # udevadm_pid=49645 00:02:55.078 17:57:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.078 17:57:52 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:55.078 17:57:52 -- spdk/autotest.sh@54 -- # echo 49648 00:02:55.078 17:57:52 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:55.078 17:57:52 -- spdk/autotest.sh@56 -- # echo 49649 00:02:55.078 17:57:52 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:55.078 17:57:52 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:55.078 17:57:52 -- 
spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:55.078 17:57:52 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:55.078 17:57:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:55.078 17:57:52 -- common/autotest_common.sh@10 -- # set +x 00:02:55.078 17:57:52 -- spdk/autotest.sh@70 -- # create_test_list 00:02:55.078 17:57:52 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:55.078 17:57:52 -- common/autotest_common.sh@10 -- # set +x 00:02:55.078 17:57:52 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:55.078 17:57:52 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:55.078 17:57:52 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:55.078 17:57:52 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:55.078 17:57:52 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:55.078 17:57:52 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:55.078 17:57:52 -- common/autotest_common.sh@1440 -- # uname 00:02:55.078 17:57:52 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:55.078 17:57:52 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:55.078 17:57:52 -- common/autotest_common.sh@1460 -- # uname 00:02:55.078 17:57:52 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:55.078 17:57:52 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:55.078 17:57:52 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:55.078 17:57:52 -- spdk/autotest.sh@83 -- # hash lcov 00:02:55.078 17:57:52 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:55.078 17:57:52 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:55.078 --rc lcov_branch_coverage=1 00:02:55.078 --rc lcov_function_coverage=1 00:02:55.078 --rc genhtml_branch_coverage=1 00:02:55.078 --rc genhtml_function_coverage=1 00:02:55.078 --rc genhtml_legend=1 00:02:55.078 --rc geninfo_all_blocks=1 00:02:55.078 ' 00:02:55.078 17:57:52 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:55.078 --rc lcov_branch_coverage=1 00:02:55.078 --rc lcov_function_coverage=1 00:02:55.078 --rc genhtml_branch_coverage=1 00:02:55.078 --rc genhtml_function_coverage=1 00:02:55.078 --rc genhtml_legend=1 00:02:55.078 --rc geninfo_all_blocks=1 00:02:55.078 ' 00:02:55.078 17:57:52 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:55.078 --rc lcov_branch_coverage=1 00:02:55.078 --rc lcov_function_coverage=1 00:02:55.078 --rc genhtml_branch_coverage=1 00:02:55.078 --rc genhtml_function_coverage=1 00:02:55.078 --rc genhtml_legend=1 00:02:55.078 --rc geninfo_all_blocks=1 00:02:55.078 --no-external' 00:02:55.078 17:57:52 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:55.078 --rc lcov_branch_coverage=1 00:02:55.078 --rc lcov_function_coverage=1 00:02:55.078 --rc genhtml_branch_coverage=1 00:02:55.078 --rc genhtml_function_coverage=1 00:02:55.078 --rc genhtml_legend=1 00:02:55.078 --rc geninfo_all_blocks=1 00:02:55.078 --no-external' 00:02:55.078 17:57:52 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:55.078 lcov: LCOV version 1.14 00:02:55.078 17:57:52 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c 
-i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:03.186 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:03.186 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:03.186 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:03.186 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:03.186 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:03.186 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:25.106 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:25.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:25.107 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 
00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:25.107 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:25.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:25.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:25.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:25.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:25.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:25.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:25.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:25.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:26.041 17:58:23 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:26.041 17:58:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:26.041 17:58:23 -- common/autotest_common.sh@10 -- # set +x 00:03:26.041 17:58:23 -- spdk/autotest.sh@102 -- # rm -f 00:03:26.041 17:58:23 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:26.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.865 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:26.865 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:26.865 17:58:24 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:26.865 17:58:24 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:26.865 17:58:24 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:26.865 17:58:24 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:26.865 17:58:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.865 17:58:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:26.865 17:58:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:26.865 17:58:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.865 17:58:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.865 17:58:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.865 17:58:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n2 00:03:26.865 17:58:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n2 00:03:26.865 17:58:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:26.866 17:58:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.866 17:58:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.866 17:58:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n3 00:03:26.866 17:58:24 -- common/autotest_common.sh@1647 -- # local 
device=nvme0n3 00:03:26.866 17:58:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:26.866 17:58:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.866 17:58:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:26.866 17:58:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:26.866 17:58:24 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:26.866 17:58:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:26.866 17:58:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:26.866 17:58:24 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:26.866 17:58:24 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 00:03:26.866 17:58:24 -- spdk/autotest.sh@121 -- # grep -v p 00:03:26.866 17:58:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.866 17:58:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.866 17:58:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:26.866 17:58:24 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:26.866 17:58:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.866 No valid GPT data, bailing 00:03:26.866 17:58:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.866 17:58:24 -- scripts/common.sh@393 -- # pt= 00:03:26.866 17:58:24 -- scripts/common.sh@394 -- # return 1 00:03:26.866 17:58:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.866 1+0 records in 00:03:26.866 1+0 records out 00:03:26.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360587 s, 291 MB/s 00:03:26.866 17:58:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.866 17:58:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.866 17:58:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n2 00:03:26.866 17:58:24 -- scripts/common.sh@380 -- # local block=/dev/nvme0n2 pt 00:03:26.866 17:58:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:03:26.866 No valid GPT data, bailing 00:03:26.866 17:58:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:26.866 17:58:24 -- scripts/common.sh@393 -- # pt= 00:03:26.866 17:58:24 -- scripts/common.sh@394 -- # return 1 00:03:26.866 17:58:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:03:26.866 1+0 records in 00:03:26.866 1+0 records out 00:03:26.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391752 s, 268 MB/s 00:03:26.866 17:58:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:26.866 17:58:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:26.866 17:58:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n3 00:03:26.866 17:58:24 -- scripts/common.sh@380 -- # local block=/dev/nvme0n3 pt 00:03:26.866 17:58:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:03:27.124 No valid GPT data, bailing 00:03:27.124 17:58:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:27.124 17:58:24 -- scripts/common.sh@393 -- # pt= 00:03:27.124 17:58:24 -- scripts/common.sh@394 -- # return 1 00:03:27.124 17:58:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:03:27.125 1+0 records in 00:03:27.125 1+0 records out 00:03:27.125 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.00418342 s, 251 MB/s 00:03:27.125 17:58:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.125 17:58:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.125 17:58:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:03:27.125 17:58:24 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:27.125 17:58:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:27.125 No valid GPT data, bailing 00:03:27.125 17:58:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:27.125 17:58:24 -- scripts/common.sh@393 -- # pt= 00:03:27.125 17:58:24 -- scripts/common.sh@394 -- # return 1 00:03:27.125 17:58:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:27.125 1+0 records in 00:03:27.125 1+0 records out 00:03:27.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392685 s, 267 MB/s 00:03:27.125 17:58:24 -- spdk/autotest.sh@129 -- # sync 00:03:27.125 17:58:24 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.125 17:58:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.125 17:58:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:29.027 17:58:26 -- spdk/autotest.sh@135 -- # uname -s 00:03:29.027 17:58:26 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:29.027 17:58:26 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.027 17:58:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.027 17:58:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.027 17:58:26 -- common/autotest_common.sh@10 -- # set +x 00:03:29.027 ************************************ 00:03:29.027 START TEST setup.sh 00:03:29.027 ************************************ 00:03:29.027 17:58:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.027 * Looking for test storage... 00:03:29.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.027 17:58:26 -- setup/test-setup.sh@10 -- # uname -s 00:03:29.027 17:58:26 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:29.027 17:58:26 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.027 17:58:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.027 17:58:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.027 17:58:26 -- common/autotest_common.sh@10 -- # set +x 00:03:29.027 ************************************ 00:03:29.027 START TEST acl 00:03:29.027 ************************************ 00:03:29.027 17:58:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.027 * Looking for test storage... 
00:03:29.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.027 17:58:26 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:29.027 17:58:26 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:29.027 17:58:26 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:29.027 17:58:26 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:29.027 17:58:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.027 17:58:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:29.027 17:58:26 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:29.028 17:58:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.028 17:58:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.028 17:58:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.028 17:58:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n2 00:03:29.028 17:58:26 -- common/autotest_common.sh@1647 -- # local device=nvme0n2 00:03:29.028 17:58:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:29.028 17:58:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.028 17:58:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.028 17:58:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n3 00:03:29.028 17:58:26 -- common/autotest_common.sh@1647 -- # local device=nvme0n3 00:03:29.028 17:58:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:29.028 17:58:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.028 17:58:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:29.028 17:58:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:29.028 17:58:26 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:29.028 17:58:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:29.028 17:58:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:29.028 17:58:26 -- setup/acl.sh@12 -- # devs=() 00:03:29.028 17:58:26 -- setup/acl.sh@12 -- # declare -a devs 00:03:29.028 17:58:26 -- setup/acl.sh@13 -- # drivers=() 00:03:29.028 17:58:26 -- setup/acl.sh@13 -- # declare -A drivers 00:03:29.028 17:58:26 -- setup/acl.sh@51 -- # setup reset 00:03:29.028 17:58:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.028 17:58:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:29.593 17:58:27 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:29.593 17:58:27 -- setup/acl.sh@16 -- # local dev driver 00:03:29.593 17:58:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.593 17:58:27 -- setup/acl.sh@15 -- # setup output status 00:03:29.593 17:58:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.593 17:58:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:29.851 Hugepages 00:03:29.851 node hugesize free / total 00:03:29.851 17:58:27 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:29.851 17:58:27 -- setup/acl.sh@19 -- # continue 00:03:29.851 17:58:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.851 00:03:29.851 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.851 17:58:27 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:29.851 17:58:27 -- setup/acl.sh@19 -- # continue 00:03:29.851 17:58:27 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:03:29.851 17:58:27 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:29.851 17:58:27 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:29.851 17:58:27 -- setup/acl.sh@20 -- # continue 00:03:29.851 17:58:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.851 17:58:27 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:29.851 17:58:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.851 17:58:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:29.851 17:58:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.851 17:58:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.851 17:58:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.150 17:58:27 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:30.150 17:58:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:30.150 17:58:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:30.150 17:58:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:30.150 17:58:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:30.150 17:58:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.150 17:58:27 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:30.150 17:58:27 -- setup/acl.sh@54 -- # run_test denied denied 00:03:30.150 17:58:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.150 17:58:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.150 17:58:27 -- common/autotest_common.sh@10 -- # set +x 00:03:30.150 ************************************ 00:03:30.150 START TEST denied 00:03:30.150 ************************************ 00:03:30.150 17:58:27 -- common/autotest_common.sh@1104 -- # denied 00:03:30.150 17:58:27 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:30.150 17:58:27 -- setup/acl.sh@38 -- # setup output config 00:03:30.150 17:58:27 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:30.150 17:58:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.150 17:58:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.084 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:31.084 17:58:28 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:31.084 17:58:28 -- setup/acl.sh@28 -- # local dev driver 00:03:31.084 17:58:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.084 17:58:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:31.084 17:58:28 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:31.084 17:58:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.084 17:58:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.084 17:58:28 -- setup/acl.sh@41 -- # setup reset 00:03:31.084 17:58:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.084 17:58:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:31.342 00:03:31.342 real 0m1.423s 00:03:31.342 user 0m0.576s 00:03:31.342 sys 0m0.806s 00:03:31.342 17:58:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.342 17:58:29 -- common/autotest_common.sh@10 -- # set +x 00:03:31.342 ************************************ 00:03:31.342 END TEST denied 00:03:31.342 ************************************ 00:03:31.601 17:58:29 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.601 17:58:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.601 17:58:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.601 
17:58:29 -- common/autotest_common.sh@10 -- # set +x 00:03:31.601 ************************************ 00:03:31.601 START TEST allowed 00:03:31.601 ************************************ 00:03:31.601 17:58:29 -- common/autotest_common.sh@1104 -- # allowed 00:03:31.601 17:58:29 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:31.601 17:58:29 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:31.601 17:58:29 -- setup/acl.sh@45 -- # setup output config 00:03:31.601 17:58:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.601 17:58:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.167 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.167 17:58:30 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:03:32.167 17:58:30 -- setup/acl.sh@28 -- # local dev driver 00:03:32.167 17:58:30 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:32.167 17:58:30 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:32.167 17:58:30 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:32.167 17:58:30 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:32.167 17:58:30 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:32.167 17:58:30 -- setup/acl.sh@48 -- # setup reset 00:03:32.167 17:58:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.167 17:58:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.105 00:03:33.105 real 0m1.480s 00:03:33.105 user 0m0.618s 00:03:33.105 sys 0m0.856s 00:03:33.105 17:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.105 ************************************ 00:03:33.105 END TEST allowed 00:03:33.105 ************************************ 00:03:33.105 17:58:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.105 00:03:33.105 real 0m4.141s 00:03:33.105 user 0m1.721s 00:03:33.105 sys 0m2.397s 00:03:33.105 17:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.105 17:58:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.105 ************************************ 00:03:33.105 END TEST acl 00:03:33.105 ************************************ 00:03:33.105 17:58:30 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:33.105 17:58:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.105 17:58:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.105 17:58:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.105 ************************************ 00:03:33.105 START TEST hugepages 00:03:33.105 ************************************ 00:03:33.105 17:58:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:33.105 * Looking for test storage... 
00:03:33.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:33.105 17:58:30 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:33.105 17:58:30 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:33.105 17:58:30 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:33.105 17:58:30 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:33.105 17:58:30 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:33.105 17:58:30 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:33.105 17:58:30 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:33.105 17:58:30 -- setup/common.sh@18 -- # local node= 00:03:33.105 17:58:30 -- setup/common.sh@19 -- # local var val 00:03:33.105 17:58:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.105 17:58:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.105 17:58:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.105 17:58:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.105 17:58:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.105 17:58:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5459868 kB' 'MemAvailable: 7401220 kB' 'Buffers: 2436 kB' 'Cached: 2150960 kB' 'SwapCached: 0 kB' 'Active: 872128 kB' 'Inactive: 1383724 kB' 'Active(anon): 112944 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 104076 kB' 'Mapped: 48748 kB' 'Shmem: 10488 kB' 'KReclaimable: 70768 kB' 'Slab: 145012 kB' 'SReclaimable: 70768 kB' 'SUnreclaim: 74244 kB' 'KernelStack: 6540 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 334936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- 
setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.105 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # continue 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 17:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 17:58:30 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.106 17:58:30 -- setup/common.sh@33 -- # echo 2048 00:03:33.106 17:58:30 -- setup/common.sh@33 -- # return 0 00:03:33.106 17:58:30 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:33.106 17:58:30 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:33.106 17:58:30 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:33.106 17:58:30 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:33.106 17:58:30 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:33.106 17:58:30 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
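The trace above shows setup/common.sh's get_meminfo walking /proc/meminfo with read -r under IFS=': ' until the Hugepagesize line matches, echoing 2048, and hugepages.sh then deriving the per-size and global nr_hugepages knobs from that value. A minimal standalone sketch of the same lookup; the function name get_hugepagesize_kb is hypothetical, the harness's real helper is setup/common.sh:get_meminfo:

    # Sketch of the Hugepagesize lookup traced above (hypothetical helper name).
    get_hugepagesize_kb() {
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == Hugepagesize ]]; then
                echo "$val"        # size of one default huge page in kB, e.g. 2048
                return 0
            fi
        done < /proc/meminfo
        return 1                   # key not present
    }

    default_hugepages=$(get_hugepagesize_kb)   # 2048 on this test VM
    default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
    global_huge_nr=/proc/sys/vm/nr_hugepages

With the 2048 kB page size in hand, those two paths are what the script later uses to reserve pages either per size class or system-wide.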
00:03:33.106 17:58:30 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:33.106 17:58:30 -- setup/hugepages.sh@207 -- # get_nodes 00:03:33.106 17:58:30 -- setup/hugepages.sh@27 -- # local node 00:03:33.106 17:58:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.106 17:58:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:33.106 17:58:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:33.106 17:58:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.106 17:58:30 -- setup/hugepages.sh@208 -- # clear_hp 00:03:33.106 17:58:30 -- setup/hugepages.sh@37 -- # local node hp 00:03:33.107 17:58:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.107 17:58:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.107 17:58:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.107 17:58:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.107 17:58:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.107 17:58:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:33.107 17:58:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:33.107 17:58:30 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:33.107 17:58:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.107 17:58:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.107 17:58:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.107 ************************************ 00:03:33.107 START TEST default_setup 00:03:33.107 ************************************ 00:03:33.107 17:58:31 -- common/autotest_common.sh@1104 -- # default_setup 00:03:33.107 17:58:31 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:33.107 17:58:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:33.107 17:58:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:33.107 17:58:31 -- setup/hugepages.sh@51 -- # shift 00:03:33.107 17:58:31 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:33.107 17:58:31 -- setup/hugepages.sh@52 -- # local node_ids 00:03:33.107 17:58:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.107 17:58:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:33.107 17:58:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:33.107 17:58:31 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:33.107 17:58:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.107 17:58:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:33.107 17:58:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:33.107 17:58:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.107 17:58:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.107 17:58:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:33.107 17:58:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:33.107 17:58:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:33.107 17:58:31 -- setup/hugepages.sh@73 -- # return 0 00:03:33.107 17:58:31 -- setup/hugepages.sh@137 -- # setup output 00:03:33.107 17:58:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.107 17:58:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.042 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.042 
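In the trace just above, clear_hp zeroes every per-node hugepage reservation before the test, and get_test_nr_hugepages turns the requested 2097152 (kB, which works out to 2 GiB) into 1024 default-size pages on node 0 before scripts/setup.sh rebinds the emulated NVMe controllers. Roughly equivalent standalone steps, as a sketch only: setup.sh performs the actual allocation in the real run, the node0 sysfs path is an assumption about where the count ends up, and all of these writes require root:

    # Clear any leftover hugepage reservations on every NUMA node (sketch of clear_hp).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$hp"
        done
    done

    # Convert the requested amount (in kB) into a page count for node 0
    # (sketch of get_test_nr_hugepages 2097152 0).
    size_kb=2097152
    hugepagesize_kb=2048
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024 pages
    echo "$nr_hugepages" > "/sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages"

The surrounding "nvme -> uio_pci_generic" messages come from setup.sh moving the emulated NVMe controllers off the kernel nvme driver onto a userspace-capable one for the tests.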
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.042 17:58:31 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:34.042 17:58:31 -- setup/hugepages.sh@89 -- # local node 00:03:34.042 17:58:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.042 17:58:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.042 17:58:31 -- setup/hugepages.sh@92 -- # local surp 00:03:34.042 17:58:31 -- setup/hugepages.sh@93 -- # local resv 00:03:34.042 17:58:31 -- setup/hugepages.sh@94 -- # local anon 00:03:34.042 17:58:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.042 17:58:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.042 17:58:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.042 17:58:31 -- setup/common.sh@18 -- # local node= 00:03:34.042 17:58:31 -- setup/common.sh@19 -- # local var val 00:03:34.042 17:58:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.042 17:58:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.042 17:58:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.042 17:58:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.042 17:58:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.042 17:58:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.042 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.042 17:58:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547248 kB' 'MemAvailable: 9488380 kB' 'Buffers: 2436 kB' 'Cached: 2150952 kB' 'SwapCached: 0 kB' 'Active: 888744 kB' 'Inactive: 1383732 kB' 'Active(anon): 129560 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120396 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144628 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74320 kB' 'KernelStack: 6480 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.042 17:58:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.042 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.042 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.042 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 
17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 
-- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.043 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.043 17:58:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.043 17:58:31 -- setup/common.sh@33 -- # echo 0 00:03:34.043 17:58:31 -- setup/common.sh@33 -- # return 0 00:03:34.043 17:58:31 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.044 17:58:31 -- 
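At this point verify_nr_hugepages has read AnonHugePages (anon=0) and, as traced below, goes on to read HugePages_Surp, HugePages_Rsvd and HugePages_Total the same way before checking that the page counts add up. A compact sketch of that bookkeeping, using an assumed awk-based helper rather than the harness's get_meminfo, and treating the final comparison as the kind of consistency check the trace performs:

    # Assumed helper (not the harness's get_meminfo): pull one numeric field from /proc/meminfo.
    get_meminfo_field() {
        awk -v key="$1" -F': +' '$1 == key { print $2 + 0; exit }' /proc/meminfo
    }

    anon=$(get_meminfo_field AnonHugePages)     # 0 kB in this run
    surp=$(get_meminfo_field HugePages_Surp)    # 0
    resv=$(get_meminfo_field HugePages_Rsvd)    # 0
    total=$(get_meminfo_field HugePages_Total)  # 1024

    nr_hugepages=1024                           # what default_setup requested
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting matches the request"
    fi

The meminfo snapshots printed in this part of the log already show the allocation in place: HugePages_Total and HugePages_Free are 1024 and Hugetlb is 2097152 kB, i.e. 1024 pages of 2048 kB each.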
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.044 17:58:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.044 17:58:31 -- setup/common.sh@18 -- # local node= 00:03:34.044 17:58:31 -- setup/common.sh@19 -- # local var val 00:03:34.044 17:58:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.044 17:58:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.044 17:58:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.044 17:58:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.044 17:58:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.044 17:58:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7546996 kB' 'MemAvailable: 9488128 kB' 'Buffers: 2436 kB' 'Cached: 2150952 kB' 'SwapCached: 0 kB' 'Active: 888448 kB' 'Inactive: 1383732 kB' 'Active(anon): 129264 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120360 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144628 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74320 kB' 'KernelStack: 6464 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- 
setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.044 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.044 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.045 17:58:31 -- setup/common.sh@33 -- # echo 0 00:03:34.045 17:58:31 -- setup/common.sh@33 -- # return 0 00:03:34.045 17:58:31 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.045 17:58:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.045 17:58:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.045 17:58:31 -- setup/common.sh@18 -- # local node= 00:03:34.045 17:58:31 -- setup/common.sh@19 -- # local var val 00:03:34.045 17:58:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.045 17:58:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.045 17:58:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.045 17:58:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.045 17:58:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.045 17:58:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7546996 kB' 'MemAvailable: 9488128 kB' 'Buffers: 2436 kB' 'Cached: 2150952 kB' 
'SwapCached: 0 kB' 'Active: 888176 kB' 'Inactive: 1383732 kB' 'Active(anon): 128992 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120136 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144628 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74320 kB' 'KernelStack: 6480 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.045 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.045 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 
00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.046 17:58:31 -- setup/common.sh@33 -- # echo 0 00:03:34.046 17:58:31 -- setup/common.sh@33 -- # return 0 00:03:34.046 17:58:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.046 nr_hugepages=1024 00:03:34.046 17:58:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.046 resv_hugepages=0 00:03:34.046 17:58:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.046 surplus_hugepages=0 00:03:34.046 17:58:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.046 anon_hugepages=0 00:03:34.046 17:58:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.046 17:58:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.046 17:58:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.046 17:58:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.046 17:58:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.046 17:58:31 -- setup/common.sh@18 -- # local node= 00:03:34.046 17:58:31 -- setup/common.sh@19 -- # local var val 00:03:34.046 17:58:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.046 17:58:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.047 17:58:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.047 17:58:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.047 17:58:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.047 17:58:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7546996 kB' 'MemAvailable: 9488128 kB' 'Buffers: 2436 kB' 'Cached: 2150952 kB' 'SwapCached: 0 kB' 'Active: 888008 kB' 'Inactive: 1383732 kB' 'Active(anon): 128824 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120048 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144616 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74308 kB' 'KernelStack: 6448 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.047 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.047 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 
00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.335 17:58:31 -- setup/common.sh@33 -- # echo 1024 
00:03:34.335 17:58:31 -- setup/common.sh@33 -- # return 0 00:03:34.335 17:58:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.335 17:58:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.335 17:58:31 -- setup/hugepages.sh@27 -- # local node 00:03:34.335 17:58:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.335 17:58:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.335 17:58:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.335 17:58:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.335 17:58:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.335 17:58:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.335 17:58:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.335 17:58:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.335 17:58:31 -- setup/common.sh@18 -- # local node=0 00:03:34.335 17:58:31 -- setup/common.sh@19 -- # local var val 00:03:34.335 17:58:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.335 17:58:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.335 17:58:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.335 17:58:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.335 17:58:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.335 17:58:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7546996 kB' 'MemUsed: 4694980 kB' 'SwapCached: 0 kB' 'Active: 888088 kB' 'Inactive: 1383732 kB' 'Active(anon): 128904 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2153388 kB' 'Mapped: 48576 kB' 'AnonPages: 120092 kB' 'Shmem: 10464 kB' 'KernelStack: 6448 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 144612 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.335 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.335 17:58:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 
17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 
17:58:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.336 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.336 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.336 17:58:32 -- setup/common.sh@33 -- # echo 0 00:03:34.336 17:58:32 -- setup/common.sh@33 -- # return 0 00:03:34.336 17:58:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.336 17:58:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.336 17:58:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.336 17:58:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.336 node0=1024 expecting 1024 00:03:34.336 17:58:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.336 17:58:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.336 00:03:34.336 real 0m1.011s 00:03:34.336 user 0m0.472s 00:03:34.336 sys 0m0.510s 00:03:34.336 17:58:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.336 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:03:34.336 ************************************ 00:03:34.336 END TEST default_setup 00:03:34.336 ************************************ 00:03:34.336 17:58:32 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:34.336 17:58:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:34.336 17:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:34.336 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:03:34.336 ************************************ 00:03:34.336 START TEST per_node_1G_alloc 00:03:34.336 ************************************ 
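The per_node_1G_alloc trace that follows repeats the two steps seen throughout this log: get_test_nr_hugepages converts the requested size into a per-node hugepage count (1048576 kB at the 2048 kB Hugepagesize reported in the meminfo dumps above works out to NRHUGE=512 on HUGENODE=0), and the verification pass re-reads /proc/meminfo and /sys/devices/system/node/node0/meminfo field by field, which is what produces the long runs of "continue" xtrace lines. Below is a minimal standalone sketch of both steps, for illustration only; the helper names pages_for_size and get_meminfo_field are this sketch's own and are not part of the SPDK setup scripts.

#!/usr/bin/env bash
# Illustrative sketch only; not the SPDK setup/common.sh or hugepages.sh code.
# Assumes the 2048 kB Hugepagesize reported in the meminfo dumps of this log.

# Convert a requested size in kB into a hugepage count (1048576 kB -> 512 pages).
pages_for_size() {
    local size_kb=$1 hugepage_kb=${2:-2048}
    echo $(( size_kb / hugepage_kb ))
}

# Read one field from a meminfo-style file. Splitting each line on ': ' and
# skipping every non-matching key is what produces the repeated "continue"
# entries in the xtrace above. Per-node files prefix each line with "Node <id> ",
# so that prefix is stripped before matching, as the traced script also does.
get_meminfo_field() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

nr=$(pages_for_size 1048576)                                     # -> 512
echo "would request HUGENODE=0 NRHUGE=$nr via scripts/setup.sh"
get_meminfo_field HugePages_Total                                # system-wide count
get_meminfo_field HugePages_Free /sys/devices/system/node/node0/meminfo

Once the per-node allocation succeeds, both calls above would print 512, matching the 'HugePages_Total: 512' and 'HugePages_Free: 512' entries in the meminfo dumps that follow in this trace.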
00:03:34.336 17:58:32 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:34.336 17:58:32 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:34.336 17:58:32 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:34.336 17:58:32 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:34.336 17:58:32 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:34.336 17:58:32 -- setup/hugepages.sh@51 -- # shift 00:03:34.336 17:58:32 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:34.336 17:58:32 -- setup/hugepages.sh@52 -- # local node_ids 00:03:34.336 17:58:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.336 17:58:32 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:34.336 17:58:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:34.336 17:58:32 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:34.337 17:58:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.337 17:58:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:34.337 17:58:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.337 17:58:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.337 17:58:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.337 17:58:32 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:34.337 17:58:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.337 17:58:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:34.337 17:58:32 -- setup/hugepages.sh@73 -- # return 0 00:03:34.337 17:58:32 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:34.337 17:58:32 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:34.337 17:58:32 -- setup/hugepages.sh@146 -- # setup output 00:03:34.337 17:58:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.337 17:58:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.601 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.601 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.601 17:58:32 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:34.601 17:58:32 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:34.601 17:58:32 -- setup/hugepages.sh@89 -- # local node 00:03:34.601 17:58:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.601 17:58:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.601 17:58:32 -- setup/hugepages.sh@92 -- # local surp 00:03:34.601 17:58:32 -- setup/hugepages.sh@93 -- # local resv 00:03:34.601 17:58:32 -- setup/hugepages.sh@94 -- # local anon 00:03:34.601 17:58:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.601 17:58:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.601 17:58:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.601 17:58:32 -- setup/common.sh@18 -- # local node= 00:03:34.601 17:58:32 -- setup/common.sh@19 -- # local var val 00:03:34.601 17:58:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.601 17:58:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.601 17:58:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.601 17:58:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.601 17:58:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.601 17:58:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8595820 kB' 'MemAvailable: 10536960 kB' 'Buffers: 2436 kB' 'Cached: 2150952 kB' 'SwapCached: 0 kB' 'Active: 889104 kB' 'Inactive: 1383740 kB' 'Active(anon): 129920 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120840 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144636 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74328 kB' 'KernelStack: 6456 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.601 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.601 17:58:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 
17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.602 17:58:32 -- setup/common.sh@33 -- # echo 0 00:03:34.602 17:58:32 -- setup/common.sh@33 -- # return 0 00:03:34.602 17:58:32 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.602 17:58:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.602 17:58:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.602 17:58:32 -- setup/common.sh@18 -- # local node= 00:03:34.602 17:58:32 -- setup/common.sh@19 -- # local var val 00:03:34.602 17:58:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.602 17:58:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.602 17:58:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.602 17:58:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.602 17:58:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.602 17:58:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8596328 kB' 'MemAvailable: 10537468 kB' 'Buffers: 2436 kB' 'Cached: 2150952 kB' 'SwapCached: 0 kB' 'Active: 888648 kB' 'Inactive: 1383740 kB' 'Active(anon): 129464 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120352 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 
kB' 'Slab: 144632 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74324 kB' 'KernelStack: 6488 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 
17:58:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.602 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.602 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.602 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.603 17:58:32 -- setup/common.sh@33 -- # echo 0 00:03:34.603 17:58:32 -- setup/common.sh@33 -- # return 0 00:03:34.603 17:58:32 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.603 17:58:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.603 17:58:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.603 17:58:32 -- setup/common.sh@18 -- # local node= 00:03:34.603 17:58:32 -- setup/common.sh@19 -- # local var val 00:03:34.603 17:58:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.603 17:58:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.603 17:58:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.603 17:58:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.603 17:58:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.603 17:58:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8597512 kB' 'MemAvailable: 10538652 kB' 'Buffers: 2436 kB' 'Cached: 2150952 kB' 'SwapCached: 0 kB' 'Active: 888676 kB' 'Inactive: 1383740 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120416 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144616 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74308 kB' 'KernelStack: 6496 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- 
# continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.603 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.603 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.604 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.604 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.604 17:58:32 -- setup/common.sh@33 -- # echo 0 00:03:34.604 17:58:32 -- setup/common.sh@33 -- # return 0 00:03:34.604 17:58:32 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.604 nr_hugepages=512 00:03:34.604 17:58:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:34.604 
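The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" followed by "continue" above are xtrace output from the get_meminfo helper in setup/common.sh: it snapshots meminfo, then scans it field by field until the requested key matches and echoes that value (here HugePages_Surp and HugePages_Rsvd both come back as 0, captured as surp=0 and resv=0). A minimal sketch of that lookup, reconstructed from the trace rather than quoted from the SPDK source:

    # Hypothetical simplification of the get_meminfo lookup seen in the trace:
    # fetch a single "Key: value" field from /proc/meminfo. (The real helper
    # also accepts a node number, switching to the per-node meminfo file and
    # stripping the "Node N" prefix, as the mapfile pattern in the log shows.)
    get_meminfo() {
        local get=$1
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
            echo "${val:-0}"                   # e.g. 0 for HugePages_Rsvd
            return 0
        done < /proc/meminfo
    }

    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run

Those surp/resv values feed the consistency checks that follow, which compare the kernel-reported hugepage totals against the 512 pages this test requested plus any surplus and reserved pages.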
resv_hugepages=0 00:03:34.604 17:58:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.604 surplus_hugepages=0 00:03:34.604 17:58:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.604 anon_hugepages=0 00:03:34.604 17:58:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.604 17:58:32 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:34.604 17:58:32 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:34.604 17:58:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.604 17:58:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.604 17:58:32 -- setup/common.sh@18 -- # local node= 00:03:34.604 17:58:32 -- setup/common.sh@19 -- # local var val 00:03:34.604 17:58:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.604 17:58:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.604 17:58:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.604 17:58:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.863 17:58:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.863 17:58:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8597512 kB' 'MemAvailable: 10538656 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888340 kB' 'Inactive: 1383744 kB' 'Active(anon): 129156 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120300 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144612 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74304 kB' 'KernelStack: 6448 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.863 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.863 17:58:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 
17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.864 17:58:32 -- setup/common.sh@33 -- # echo 512 00:03:34.864 17:58:32 -- setup/common.sh@33 -- # return 0 00:03:34.864 17:58:32 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:34.864 17:58:32 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.864 17:58:32 -- setup/hugepages.sh@27 -- # local node 00:03:34.864 17:58:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.864 17:58:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.864 17:58:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.864 17:58:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.864 17:58:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.864 17:58:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.864 17:58:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.864 17:58:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.864 17:58:32 -- setup/common.sh@18 -- # local node=0 00:03:34.864 17:58:32 -- setup/common.sh@19 -- # local var val 00:03:34.864 17:58:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.864 17:58:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.864 17:58:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.864 17:58:32 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.864 17:58:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.864 17:58:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8597260 kB' 'MemUsed: 3644716 kB' 'SwapCached: 0 kB' 'Active: 888380 kB' 'Inactive: 1383744 kB' 'Active(anon): 129196 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2153392 kB' 'Mapped: 48576 kB' 'AnonPages: 120336 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 144612 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.864 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.864 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- 
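The snapshot just above (the one reporting MemUsed, FilePages and 'HugePages_Total: 512') is not /proc/meminfo but /sys/devices/system/node/node0/meminfo: calling get_meminfo HugePages_Surp 0 switches to the per-node file so the test can confirm that each NUMA node actually holds its share of the reservation. On this single-node VM that reduces to the "node0=512 expecting 512" line printed shortly afterwards. A condensed, hypothetical version of that per-node check (not the literal hugepages.sh code):

    # One NUMA node, 512 pages expected on it; values taken from this run.
    shopt -s extglob
    nodes_test=()
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        nodes_test[${node_dir##*node}]=512
    done
    for node in "${!nodes_test[@]}"; do
        # Per-node counters live in the node's own meminfo file.
        surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_test[node]} expecting 512"
    done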
setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # continue 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.865 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.865 17:58:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.865 17:58:32 -- setup/common.sh@33 -- # echo 0 00:03:34.865 17:58:32 -- setup/common.sh@33 -- # return 0 00:03:34.865 17:58:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.865 17:58:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.865 17:58:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.865 17:58:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.865 node0=512 expecting 512 00:03:34.865 17:58:32 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.865 17:58:32 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.865 00:03:34.865 real 0m0.527s 00:03:34.865 user 0m0.257s 00:03:34.865 sys 0m0.306s 00:03:34.865 17:58:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.865 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:03:34.865 ************************************ 00:03:34.865 END TEST per_node_1G_alloc 00:03:34.865 ************************************ 00:03:34.865 17:58:32 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:34.865 17:58:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:34.865 17:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:34.865 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:03:34.865 ************************************ 00:03:34.865 START TEST even_2G_alloc 00:03:34.865 ************************************ 00:03:34.865 17:58:32 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:34.865 17:58:32 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:34.865 17:58:32 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.865 17:58:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.865 17:58:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.865 17:58:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.865 17:58:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.865 17:58:32 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.865 17:58:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.865 17:58:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.865 17:58:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.865 17:58:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.865 17:58:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.865 17:58:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.865 17:58:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.865 17:58:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.865 17:58:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:34.865 17:58:32 -- setup/hugepages.sh@83 -- # : 0 00:03:34.865 17:58:32 -- 
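per_node_1G_alloc wraps up above ("node0=512 expecting 512", [[ 512 == 512 ]]) and even_2G_alloc starts. Its get_test_nr_hugepages 2097152 call converts a size in kB (2 GiB, matching the test name) into a page count using the 2048 kB hugepage size visible in the meminfo snapshots, which is where the nr_hugepages=1024 above and the NRHUGE=1024 / HUGE_EVEN_ALLOC=yes settings that follow come from:

    # Sizing arithmetic behind even_2G_alloc, using the values from this run.
    size_kb=2097152                                                     # 2 GiB requested
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024
    echo "NRHUGE=$nr_hugepages"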
setup/hugepages.sh@84 -- # : 0 00:03:34.865 17:58:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.865 17:58:32 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:34.865 17:58:32 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:34.865 17:58:32 -- setup/hugepages.sh@153 -- # setup output 00:03:34.865 17:58:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.865 17:58:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.125 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.125 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.125 17:58:32 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:35.125 17:58:32 -- setup/hugepages.sh@89 -- # local node 00:03:35.125 17:58:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.125 17:58:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.125 17:58:32 -- setup/hugepages.sh@92 -- # local surp 00:03:35.125 17:58:32 -- setup/hugepages.sh@93 -- # local resv 00:03:35.125 17:58:32 -- setup/hugepages.sh@94 -- # local anon 00:03:35.125 17:58:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.125 17:58:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.125 17:58:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.125 17:58:32 -- setup/common.sh@18 -- # local node= 00:03:35.125 17:58:32 -- setup/common.sh@19 -- # local var val 00:03:35.125 17:58:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.125 17:58:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.125 17:58:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.125 17:58:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.125 17:58:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.125 17:58:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550268 kB' 'MemAvailable: 9491412 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888652 kB' 'Inactive: 1383744 kB' 'Active(anon): 129468 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120868 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144644 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74336 kB' 'KernelStack: 6532 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:32 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 
17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.125 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.125 17:58:33 -- setup/common.sh@32 -- # 
continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.126 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.126 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.126 17:58:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.126 17:58:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.126 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.126 17:58:33 -- setup/common.sh@18 -- # local node= 00:03:35.126 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.126 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.126 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.126 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.126 17:58:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.126 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.126 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550348 kB' 'MemAvailable: 9491492 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888372 kB' 'Inactive: 1383744 kB' 'Active(anon): 129188 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120552 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144648 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74340 kB' 'KernelStack: 6464 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # 
continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- 
# continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.126 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.126 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.126 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.126 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.126 17:58:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:35.385 17:58:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.385 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.385 17:58:33 -- setup/common.sh@18 -- # local node= 00:03:35.385 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.385 17:58:33 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:35.385 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.385 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.385 17:58:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.385 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.385 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550348 kB' 'MemAvailable: 9491492 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888168 kB' 'Inactive: 1383744 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120408 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144648 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74340 kB' 'KernelStack: 6480 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 
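The repeated pairs of pattern tests and continue calls above (and continuing below) are bash xtrace output from a key-matching loop in setup/common.sh: get_meminfo walks every "key: value" line of the meminfo source and skips ahead until it hits the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...), then echoes that key's value; the backslash-escaped names such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are simply how xtrace prints a literal match pattern. A minimal sketch of that technique, simplified from what the trace shows rather than copied from the real helper (the name get_meminfo_sketch is hypothetical):

  # get_meminfo_sketch KEY [NODE]  -- hypothetical helper, mirrors the loop visible in the trace
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      # per-node files prefix every line with "Node <N> "; strip it before matching
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # not the requested key -> keep scanning
          echo "$val"                        # e.g. 0 for AnonHugePages in this run
          return 0
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }
  anon=$(get_meminfo_sketch AnonHugePages)   # -> 0, matching anon=0 in the trace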
00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.385 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- 
setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.386 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.386 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.386 17:58:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:35.386 nr_hugepages=1024 00:03:35.386 17:58:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.386 resv_hugepages=0 00:03:35.386 17:58:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.386 surplus_hugepages=0 00:03:35.386 17:58:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.386 anon_hugepages=0 00:03:35.386 17:58:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.386 17:58:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.386 17:58:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.386 17:58:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.386 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.386 17:58:33 -- setup/common.sh@18 -- # local node= 00:03:35.386 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.386 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.386 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.386 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.386 17:58:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.386 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.386 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550348 kB' 'MemAvailable: 9491492 kB' 
'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888132 kB' 'Inactive: 1383744 kB' 'Active(anon): 128948 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120340 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144648 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74340 kB' 'KernelStack: 6448 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- 
setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 
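The scans for HugePages_Surp, HugePages_Rsvd and (below) HugePages_Total feed a small accounting check in setup/hugepages.sh: the values read back from the kernel have to add up to the page count the test configured, which is why the trace evaluates (( 1024 == nr_hugepages + surp + resv )) before moving on. Using the hypothetical helper sketched earlier, the check for this run amounts to:

  surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
  total=$(get_meminfo_sketch HugePages_Total)  # 1024
  nr_hugepages=1024                            # what even_2G_alloc asked for
  (( total == nr_hugepages + surp + resv ))    # 1024 == 1024 + 0 + 0, so the test proceeds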
00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.386 17:58:33 -- setup/common.sh@32 -- # continue 
00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.387 17:58:33 -- setup/common.sh@33 -- # echo 1024 00:03:35.387 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.387 17:58:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.387 17:58:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.387 17:58:33 -- setup/hugepages.sh@27 -- # local node 00:03:35.387 17:58:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.387 17:58:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.387 17:58:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.387 17:58:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.387 17:58:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.387 17:58:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.387 17:58:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.387 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.387 17:58:33 -- setup/common.sh@18 -- # local node=0 00:03:35.387 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.387 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.387 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.387 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.387 17:58:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.387 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.387 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550348 kB' 'MemUsed: 4691628 kB' 'SwapCached: 0 kB' 'Active: 888120 kB' 'Inactive: 1383744 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2153392 kB' 'Mapped: 48576 kB' 'AnonPages: 120372 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 144648 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 
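From the local node=0 call just above, the same lookup is repeated against the per-node file /sys/devices/system/node/node0/meminfo (the node-only fields such as MemUsed and FilePages in the printf output come from there). Every line in that file carries a "Node 0" prefix, and the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips it before the key scan. A small standalone illustration of that normalization, assuming extglob is enabled:

  shopt -s extglob
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
  printf '%s\n' "${mem[@]}" | grep -m1 HugePages_Surp   # -> "HugePages_Surp: 0" in this run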
00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- 
setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.387 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.387 17:58:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.387 17:58:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.387 17:58:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.387 17:58:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.388 node0=1024 expecting 1024 00:03:35.388 17:58:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.388 17:58:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.388 00:03:35.388 real 0m0.499s 00:03:35.388 user 0m0.239s 00:03:35.388 sys 0m0.295s 
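Every get_meminfo call traced above walks /proc/meminfo line by line with IFS=': ', skips each key that is not the requested field (the long runs of "continue" above), and echoes the matching value back to the caller. A minimal stand-alone sketch of that pattern follows; the function name and argument handling are illustrative, not the exact setup/common.sh code:

#!/usr/bin/env bash
# Sketch: pull a single field (e.g. HugePages_Surp) out of /proc/meminfo.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Keys that do not match the requested field are skipped, as in the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"          # numeric value; a trailing "kB" unit lands in the throw-away field
        return 0
    done < /proc/meminfo
    return 1
}
# Example: get_meminfo_field HugePages_Surp   -> prints 0 on this host, matching the "echo 0" in the trace.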
00:03:35.388 17:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.388 ************************************ 00:03:35.388 17:58:33 -- common/autotest_common.sh@10 -- # set +x 00:03:35.388 END TEST even_2G_alloc 00:03:35.388 ************************************ 00:03:35.388 17:58:33 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:35.388 17:58:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.388 17:58:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.388 17:58:33 -- common/autotest_common.sh@10 -- # set +x 00:03:35.388 ************************************ 00:03:35.388 START TEST odd_alloc 00:03:35.388 ************************************ 00:03:35.388 17:58:33 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:35.388 17:58:33 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:35.388 17:58:33 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:35.388 17:58:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.388 17:58:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.388 17:58:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:35.388 17:58:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.388 17:58:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.388 17:58:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.388 17:58:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:35.388 17:58:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.388 17:58:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.388 17:58:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.388 17:58:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.388 17:58:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.388 17:58:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.388 17:58:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:35.388 17:58:33 -- setup/hugepages.sh@83 -- # : 0 00:03:35.388 17:58:33 -- setup/hugepages.sh@84 -- # : 0 00:03:35.388 17:58:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.388 17:58:33 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:35.388 17:58:33 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:35.388 17:58:33 -- setup/hugepages.sh@160 -- # setup output 00:03:35.388 17:58:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.388 17:58:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.646 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.646 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.646 17:58:33 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:35.646 17:58:33 -- setup/hugepages.sh@89 -- # local node 00:03:35.646 17:58:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.646 17:58:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.646 17:58:33 -- setup/hugepages.sh@92 -- # local surp 00:03:35.646 17:58:33 -- setup/hugepages.sh@93 -- # local resv 00:03:35.646 17:58:33 -- setup/hugepages.sh@94 -- # local anon 00:03:35.646 17:58:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.646 17:58:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.646 17:58:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.646 17:58:33 -- setup/common.sh@18 -- # local node= 
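The odd_alloc test that begins just above requests HUGEMEM=2049 (MB), i.e. a 2098176 kB pool, and the trace shows that request being resolved to nr_hugepages=1025 pages of the default 2048 kB size, all placed on the single memory node (nodes_test[0]=1025). The odd count 1025 is consistent with rounding the requested size up to whole hugepages; a sketch of that sizing step under that assumption (variable names are illustrative, not the exact hugepages.sh code):

# Sketch: turn a size request in kB into an odd hugepage count, assuming round-up division.
size_kb=2098176        # HUGEMEM=2049 MB expressed in kB
hugepage_kb=2048       # default 2 MB hugepage
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # 2098176/2048 = 1024.5 -> 1025
echo "nr_hugepages=$nr_hugepages"
# With only one node present, the whole count is assigned to node 0:
declare -a nodes_test
nodes_test[0]=$nr_hugepages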
00:03:35.646 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.646 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.646 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.646 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.646 17:58:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.646 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.646 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.646 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547512 kB' 'MemAvailable: 9488656 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888852 kB' 'Inactive: 1383744 kB' 'Active(anon): 129668 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121076 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144608 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74300 kB' 'KernelStack: 6664 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 
17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 
00:03:35.646 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.646 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.646 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.647 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.647 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.907 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.907 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.907 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.907 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.907 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.907 17:58:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.907 17:58:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.907 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.907 17:58:33 -- setup/common.sh@18 -- # local node= 00:03:35.907 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.907 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.907 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.907 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.907 17:58:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.907 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.907 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 
17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547512 kB' 'MemAvailable: 9488656 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888372 kB' 'Inactive: 1383744 kB' 'Active(anon): 129188 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120260 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144640 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74332 kB' 'KernelStack: 6464 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 
17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 
17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.908 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.908 17:58:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:35.908 17:58:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.908 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.908 17:58:33 -- setup/common.sh@18 -- # local node= 00:03:35.908 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.908 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.908 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.908 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.908 17:58:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.908 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.908 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547976 kB' 'MemAvailable: 9489120 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888336 kB' 'Inactive: 1383744 kB' 'Active(anon): 129152 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120520 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144640 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74332 kB' 'KernelStack: 6464 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 
00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 
-- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 
17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.909 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.909 17:58:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:35.909 nr_hugepages=1025 00:03:35.909 17:58:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:35.909 resv_hugepages=0 00:03:35.909 17:58:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.909 surplus_hugepages=0 00:03:35.909 17:58:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.909 anon_hugepages=0 00:03:35.909 17:58:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.909 17:58:33 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:35.909 17:58:33 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:35.909 17:58:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.909 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.909 17:58:33 -- setup/common.sh@18 -- # local node= 00:03:35.909 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.909 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.909 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.909 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.909 17:58:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.909 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.909 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547976 kB' 'MemAvailable: 9489120 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888168 kB' 'Inactive: 1383744 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120404 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144640 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74332 kB' 'KernelStack: 6464 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.909 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.909 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 
17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 17:58:33 -- setup/common.sh@33 -- # echo 1025 00:03:35.910 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.910 17:58:33 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:35.910 17:58:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.910 17:58:33 -- setup/hugepages.sh@27 -- # local node 00:03:35.910 17:58:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.910 17:58:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:35.910 17:58:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.910 17:58:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.910 17:58:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.910 17:58:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.910 17:58:33 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.910 17:58:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.910 17:58:33 -- setup/common.sh@18 -- # local node=0 00:03:35.910 17:58:33 -- setup/common.sh@19 -- # local var val 00:03:35.910 17:58:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.910 17:58:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.910 17:58:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.910 17:58:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.910 17:58:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.910 17:58:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547724 kB' 'MemUsed: 4694252 kB' 'SwapCached: 0 kB' 'Active: 888164 kB' 'Inactive: 1383744 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2153392 kB' 'Mapped: 48576 kB' 'AnonPages: 120348 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 144648 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 
00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.910 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.910 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 
17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # continue 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 17:58:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 17:58:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.911 17:58:33 -- setup/common.sh@33 -- # echo 0 00:03:35.911 17:58:33 -- setup/common.sh@33 -- # return 0 00:03:35.911 17:58:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.911 17:58:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.911 17:58:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.911 node0=1025 expecting 1025 00:03:35.911 17:58:33 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:35.911 17:58:33 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:35.911 00:03:35.911 real 0m0.528s 00:03:35.911 user 0m0.272s 00:03:35.911 sys 0m0.286s 00:03:35.911 17:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.911 17:58:33 -- common/autotest_common.sh@10 -- # set +x 00:03:35.911 ************************************ 00:03:35.911 END TEST odd_alloc 00:03:35.911 ************************************ 00:03:35.911 17:58:33 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:35.911 17:58:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.911 17:58:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.911 17:58:33 -- common/autotest_common.sh@10 -- # set +x 00:03:35.911 ************************************ 00:03:35.911 START TEST custom_alloc 00:03:35.911 ************************************ 00:03:35.911 17:58:33 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:35.911 17:58:33 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:35.911 17:58:33 -- setup/hugepages.sh@169 -- # local node 00:03:35.911 17:58:33 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:35.911 17:58:33 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:35.911 17:58:33 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:35.911 17:58:33 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:35.911 17:58:33 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:35.911 17:58:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
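For orientation, the get_meminfo calls traced above boil down to a key/value scan of /proc/meminfo (or of the per-node files under /sys/devices/system/node), and the custom_alloc test starting here derives its page count by dividing the requested size by the hugepage size. Below is a minimal bash sketch of both steps, assuming the 2048 kB Hugepagesize reported in the meminfo dumps above; get_meminfo_value is a hypothetical helper named for illustration, not the setup/common.sh implementation.

# Minimal sketch (hypothetical helper, not setup/common.sh): scan a
# meminfo-style file for one key and echo its value.
get_meminfo_value() {
    local key=$1 file=${2:-/proc/meminfo}
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
# The traced script additionally strips the leading "Node N " prefix that the
# per-node /sys/devices/system/node/nodeN/meminfo files prepend to each line.
get_meminfo_value HugePages_Total        # 1025 during the odd_alloc run above

# Page-count arithmetic as just traced for custom_alloc: a 1048576 kB request
# at the default 2048 kB hugepage size gives 512 pages.
echo $(( 1048576 / 2048 ))               # 512, matching nr_hugepages=512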
00:03:35.911 17:58:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.911 17:58:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.911 17:58:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.911 17:58:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.911 17:58:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.911 17:58:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.911 17:58:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.911 17:58:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.911 17:58:33 -- setup/hugepages.sh@83 -- # : 0 00:03:35.911 17:58:33 -- setup/hugepages.sh@84 -- # : 0 00:03:35.911 17:58:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:35.911 17:58:33 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:35.911 17:58:33 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:35.911 17:58:33 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:35.911 17:58:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.911 17:58:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.911 17:58:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.911 17:58:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.911 17:58:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.911 17:58:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.911 17:58:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:35.911 17:58:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:35.911 17:58:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:35.911 17:58:33 -- setup/hugepages.sh@78 -- # return 0 00:03:35.911 17:58:33 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:35.911 17:58:33 -- setup/hugepages.sh@187 -- # setup output 00:03:35.911 17:58:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.911 17:58:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.430 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.430 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.430 17:58:34 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:36.430 17:58:34 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:36.430 17:58:34 -- setup/hugepages.sh@89 -- # local node 00:03:36.430 17:58:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.430 17:58:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.430 17:58:34 -- setup/hugepages.sh@92 -- # local surp 00:03:36.430 17:58:34 -- setup/hugepages.sh@93 -- # local resv 00:03:36.430 17:58:34 -- setup/hugepages.sh@94 -- # local anon 00:03:36.430 17:58:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.430 17:58:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.430 
17:58:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.430 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.430 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.430 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.430 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.430 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.430 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.430 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.430 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599192 kB' 'MemAvailable: 10540336 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888876 kB' 'Inactive: 1383744 kB' 'Active(anon): 129692 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120860 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144628 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74320 kB' 'KernelStack: 6456 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 
17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.430 17:58:34 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.430 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.430 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.431 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.431 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.431 17:58:34 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.431 17:58:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.431 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.431 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.431 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.431 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.431 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.431 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.431 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.431 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.431 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:03:36.431 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599192 kB' 'MemAvailable: 10540336 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888568 kB' 'Inactive: 1383744 kB' 'Active(anon): 129384 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120516 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144644 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74336 kB' 'KernelStack: 6440 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 
00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.431 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.431 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 
17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.432 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.432 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.433 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.433 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.433 17:58:34 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.433 17:58:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.433 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.433 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.433 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.433 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.433 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.433 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.433 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.433 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.433 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599192 kB' 'MemAvailable: 10540336 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888292 kB' 'Inactive: 1383744 kB' 'Active(anon): 129108 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120240 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144672 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74364 kB' 'KernelStack: 6464 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 
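The long chains of [[ <field> == \H\u\g\e\P\a\g\e\s\_... ]] / continue entries running through this part of the trace are setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the one it was asked for (HugePages_Surp, HugePages_Rsvd and HugePages_Total in turn). A minimal self-contained sketch of that lookup, not the SPDK helper itself; the function name is illustrative and only behaviour visible in the trace is assumed:

    # Minimal sketch of the lookup being stepped through above (illustrative, not the SPDK code).
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}          # per-node files prefix every field with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the [[ field == \H\u\g\e... ]] / continue chain in the log
            echo "$val"                         # e.g. 0 for HugePages_Surp, 512 for HugePages_Total
            return 0
        done < "$mem_f"
        return 1
    }

On the box in this run it would print 0 for HugePages_Surp and HugePages_Rsvd and 512 for HugePages_Total, which is exactly what the surp=0, resv=0 and nr_hugepages=512 lines below record.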
00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.433 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.433 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 
-- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.434 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.434 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.434 17:58:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.434 nr_hugepages=512 00:03:36.434 17:58:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:36.434 resv_hugepages=0 00:03:36.434 17:58:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.434 surplus_hugepages=0 00:03:36.434 17:58:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.434 anon_hugepages=0 00:03:36.434 17:58:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.434 17:58:34 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:36.434 17:58:34 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:36.434 17:58:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.434 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.434 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.434 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.434 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.434 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.434 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.434 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.434 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.434 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599192 kB' 'MemAvailable: 10540336 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888520 kB' 'Inactive: 1383744 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120468 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144672 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74364 kB' 'KernelStack: 6448 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 
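Once those lookups return, hugepages.sh@99 through @110 reduce them to a simple accounting check: the pool the custom_alloc test configured must equal what the kernel reports after surplus and reserved pages are counted. A hedged sketch of that arithmetic, using only the values visible in this run (HugePages_Total is re-read just below and comes back 512):

    # Hedged sketch of the accounting check; all numbers are the ones this run reports.
    nr_hugepages=512     # pool size the custom_alloc test configured
    surp=0               # get_meminfo HugePages_Surp  (surplus pages beyond the configured pool)
    resv=0               # get_meminfo HugePages_Rsvd  (pages reserved for mappings but not yet faulted in)
    total=512            # get_meminfo HugePages_Total, read again just below

    (( total == nr_hugepages + surp + resv )) && echo 'hugepage pool fully accounted for'
    (( total == nr_hugepages ))               && echo 'no surplus or reserved pages in this run'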
00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.434 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.434 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
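The get_nodes call that appears a few entries below (hugepages.sh@112 onwards) builds the per-node expectation that the later "node0=512 expecting 512" line checks. A minimal sketch of that bookkeeping; the trace prints 512 already expanded, so where the per-node count is read from is not visible here and the assignment below is a labelled assumption:

    # Hedged sketch of the get_nodes bookkeeping traced a few entries below.
    declare -a nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        idx=${node##*node}       # strip the path prefix, leaving just the node index ("0" here)
        nodes_sys[idx]=512       # assumption: expected pages per node; the log shows 512 pre-expanded
    done
    no_nodes=${#nodes_sys[@]}    # 1 on this single-node VM
    (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2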
00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.435 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.435 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.436 17:58:34 -- setup/common.sh@33 -- # echo 512 00:03:36.436 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.436 17:58:34 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:36.436 17:58:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.436 17:58:34 -- setup/hugepages.sh@27 -- # local node 00:03:36.436 17:58:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.436 17:58:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.436 17:58:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.436 17:58:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.436 17:58:34 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:03:36.436 17:58:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.436 17:58:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.436 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.436 17:58:34 -- setup/common.sh@18 -- # local node=0 00:03:36.436 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.436 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.436 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.436 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.436 17:58:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.436 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.436 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599192 kB' 'MemUsed: 3642784 kB' 'SwapCached: 0 kB' 'Active: 888412 kB' 'Inactive: 1383744 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2153392 kB' 'Mapped: 48576 kB' 'AnonPages: 120360 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 144668 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 
17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 
-- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.436 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.436 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.437 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.437 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.437 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.437 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.437 17:58:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.437 17:58:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.437 17:58:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.437 17:58:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.437 node0=512 expecting 512 00:03:36.437 17:58:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.437 17:58:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:36.437 00:03:36.437 real 0m0.515s 00:03:36.437 user 0m0.243s 00:03:36.437 sys 0m0.297s 00:03:36.437 17:58:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.437 17:58:34 -- common/autotest_common.sh@10 -- # set +x 00:03:36.437 ************************************ 00:03:36.437 END TEST custom_alloc 00:03:36.437 ************************************ 00:03:36.437 17:58:34 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:36.437 17:58:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:36.437 17:58:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:36.437 17:58:34 -- common/autotest_common.sh@10 -- # set +x 00:03:36.437 ************************************ 00:03:36.437 START TEST no_shrink_alloc 00:03:36.437 ************************************ 00:03:36.437 17:58:34 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:36.437 17:58:34 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:36.437 17:58:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.437 17:58:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:36.437 17:58:34 -- setup/hugepages.sh@51 -- # shift 00:03:36.437 17:58:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:36.437 17:58:34 -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.437 17:58:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.437 17:58:34 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:03:36.437 17:58:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:36.437 17:58:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:36.437 17:58:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.437 17:58:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.437 17:58:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.437 17:58:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.437 17:58:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.437 17:58:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:36.437 17:58:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.437 17:58:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:36.437 17:58:34 -- setup/hugepages.sh@73 -- # return 0 00:03:36.437 17:58:34 -- setup/hugepages.sh@198 -- # setup output 00:03:36.437 17:58:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.437 17:58:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.956 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.956 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.956 17:58:34 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:36.956 17:58:34 -- setup/hugepages.sh@89 -- # local node 00:03:36.956 17:58:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.956 17:58:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.956 17:58:34 -- setup/hugepages.sh@92 -- # local surp 00:03:36.956 17:58:34 -- setup/hugepages.sh@93 -- # local resv 00:03:36.956 17:58:34 -- setup/hugepages.sh@94 -- # local anon 00:03:36.956 17:58:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.956 17:58:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.956 17:58:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.956 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.956 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.956 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.956 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.956 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.956 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.956 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.956 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551736 kB' 'MemAvailable: 9492880 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888568 kB' 'Inactive: 1383744 kB' 'Active(anon): 129384 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120552 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144668 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74360 kB' 'KernelStack: 6504 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 
0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.956 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.956 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 
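The no_shrink_alloc prologue traced just above has already translated get_test_nr_hugepages 2097152 0 into nr_hugepages=1024 on node 0, and the verify step then checks transparent hugepages before sampling AnonHugePages. A hedged sketch of both steps; the sysfs path, variable names and the awk one-liner are illustrative assumptions, while the constants come straight from the log:

    # Sizing arithmetic: 2097152 and 1024 are from the trace; the division is the
    # obvious relation between them given the 2048 kB hugepage size.
    size_kb=2097152                                   # argument to get_test_nr_hugepages
    hugepagesize_kb=2048                              # "Hugepagesize: 2048 kB" in the meminfo dumps
    nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 1024, the value assigned at hugepages.sh@57
    echo "requesting $nr_hugepages hugepages on node 0"

    # The verify step's [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] asks whether
    # transparent hugepages are globally disabled; the bracketed word is the active
    # mode, presumably read from /sys/kernel/mm/transparent_hugepage/enabled. Only
    # when THP is available does the script go on to sample AnonHugePages (0 kB here).
    thp_modes='always [madvise] never'
    if [[ $thp_modes != *'[never]'* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # illustrative stand-in, not the script's code
        echo "AnonHugePages baseline: ${anon_kb} kB"
    fi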
00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 
17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # 
continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.957 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.957 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.957 17:58:34 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.957 17:58:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.957 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.957 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.957 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.957 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.957 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.957 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.957 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.957 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.957 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.957 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551736 kB' 'MemAvailable: 9492880 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888424 kB' 'Inactive: 1383744 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120372 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144688 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74380 kB' 'KernelStack: 6480 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.957 17:58:34 -- setup/common.sh@32 -- # 
continue 00:03:36.957 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 
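The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries here are bash xtrace from get_meminfo in setup/common.sh: the function snapshots the meminfo file into an array and then scans it field by field, skipping every key that is not the one requested. A minimal sketch of that scan pattern, using an illustrative helper name (get_value) rather than the exact setup/common.sh source:

  get_value() {
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          # every non-matching key shows up as one 'continue' entry in the xtrace
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done < "$mem_f"
  }

In this run the same scan repeats for AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total, which is why the pattern on the right-hand side of the comparison changes while the key list stays the same.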
00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.958 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.958 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.959 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.959 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.959 17:58:34 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.959 17:58:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.959 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.959 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.959 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.959 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.959 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.959 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.959 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.959 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.959 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551736 kB' 'MemAvailable: 9492880 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888424 kB' 'Inactive: 1383744 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120372 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144688 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74380 kB' 'KernelStack: 6480 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 
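The /proc/meminfo snapshot printed before each scan reports HugePages_Total: 1024 with Hugepagesize: 2048 kB, and the Hugetlb figure in the same snapshot is consistent with that: 1024 pages of 2048 kB each. A quick arithmetic check, with the values copied from the snapshot above rather than re-read from the machine:

  pages=1024 pagesize_kb=2048
  echo $(( pages * pagesize_kb )) kB   # 2097152 kB, matching the 'Hugetlb:' line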
00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.959 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.959 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- 
setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.960 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.960 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.960 nr_hugepages=1024 00:03:36.960 resv_hugepages=0 00:03:36.960 surplus_hugepages=0 00:03:36.960 anon_hugepages=0 00:03:36.960 17:58:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.960 17:58:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.960 17:58:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.960 17:58:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.960 17:58:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.960 17:58:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.960 17:58:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.960 17:58:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.960 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.960 17:58:34 -- setup/common.sh@18 -- # local node= 00:03:36.960 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.960 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.960 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
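Once AnonHugePages, HugePages_Surp and HugePages_Rsvd have been read (all zero in this run), hugepages.sh echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and then checks that HugePages_Total matches nr_hugepages plus surplus plus reserved. A compressed sketch of that bookkeeping, reusing the illustrative get_value helper from the earlier sketch and the numbers reported in this run:

  nr_hugepages=1024
  anon=0   # AnonHugePages, kB
  surp=0   # HugePages_Surp
  resv=0   # HugePages_Rsvd
  total=$(get_value HugePages_Total)        # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count: $total"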
00:03:36.960 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.960 17:58:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.960 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.960 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551736 kB' 'MemAvailable: 9492880 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888508 kB' 'Inactive: 1383744 kB' 'Active(anon): 129324 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120432 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144676 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74368 kB' 'KernelStack: 6448 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.960 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.960 17:58:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- 
setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 
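The long single-quoted entries such as 'MemTotal: 12241976 kB' 'MemFree: ...' are not separate reads: they are the xtrace of one printf over the array that mapfile filled from the meminfo file, so the whole snapshot appears as a single traced command with every line as a quoted argument. Reproducing that step in isolation (the head call is only added here to keep the output short):

  mapfile -t mem < /proc/meminfo
  printf '%s\n' "${mem[@]}" | head -n 3   # xtrace of the printf lists every element, quoted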
00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.961 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.961 17:58:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.962 17:58:34 -- setup/common.sh@33 -- # echo 1024 00:03:36.962 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.962 17:58:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.962 17:58:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.962 17:58:34 -- setup/hugepages.sh@27 -- # local node 00:03:36.962 17:58:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.962 17:58:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.962 17:58:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.962 17:58:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.962 17:58:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.962 17:58:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.962 17:58:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.962 17:58:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.962 17:58:34 -- setup/common.sh@18 -- # local node=0 00:03:36.962 17:58:34 -- setup/common.sh@19 -- # local var val 00:03:36.962 17:58:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.962 17:58:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.962 17:58:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.962 17:58:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.962 17:58:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.962 17:58:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551736 kB' 'MemUsed: 4690240 kB' 'SwapCached: 0 kB' 'Active: 888392 kB' 'Inactive: 1383744 kB' 'Active(anon): 129208 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2153392 kB' 'Mapped: 48576 kB' 'AnonPages: 120312 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 144676 
kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 
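For the per-node pass (hugepages.sh@115-@117 above), get_meminfo is called with node 0, so it switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from each line before running the same key scan. A sketch of that branch, assuming extglob is enabled (the node+([0-9]) glob in the trace implies it); the names follow the trace, but this is not the exact setup/common.sh source:

  shopt -s extglob
  node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")          # per-node lines start with "Node 0 "; drop it
  printf '%s\n' "${mem[@]}" | grep -F HugePages_Surp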
00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.962 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.962 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- 
setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # continue 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 17:58:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 17:58:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 17:58:34 -- setup/common.sh@33 -- # echo 0 00:03:36.963 17:58:34 -- setup/common.sh@33 -- # return 0 00:03:36.963 17:58:34 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:03:36.963 17:58:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.963 17:58:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.963 17:58:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.963 node0=1024 expecting 1024 00:03:36.963 17:58:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.963 17:58:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.963 17:58:34 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:36.963 17:58:34 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:36.963 17:58:34 -- setup/hugepages.sh@202 -- # setup output 00:03:36.963 17:58:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.963 17:58:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:37.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.221 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.221 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.221 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:37.221 17:58:35 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:37.221 17:58:35 -- setup/hugepages.sh@89 -- # local node 00:03:37.221 17:58:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.221 17:58:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.221 17:58:35 -- setup/hugepages.sh@92 -- # local surp 00:03:37.221 17:58:35 -- setup/hugepages.sh@93 -- # local resv 00:03:37.221 17:58:35 -- setup/hugepages.sh@94 -- # local anon 00:03:37.221 17:58:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.221 17:58:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.221 17:58:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.221 17:58:35 -- setup/common.sh@18 -- # local node= 00:03:37.221 17:58:35 -- setup/common.sh@19 -- # local var val 00:03:37.221 17:58:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.221 17:58:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.221 17:58:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.221 17:58:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.221 17:58:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.221 17:58:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.221 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.221 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.221 17:58:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551484 kB' 'MemAvailable: 9492628 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888864 kB' 'Inactive: 1383744 kB' 'Active(anon): 129680 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120804 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144696 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74388 kB' 'KernelStack: 6488 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.221 17:58:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.221 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.221 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.221 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.221 17:58:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.221 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.221 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.221 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.221 17:58:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.221 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- 
setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.482 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.482 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.483 17:58:35 -- setup/common.sh@33 -- # echo 0 00:03:37.483 17:58:35 -- setup/common.sh@33 -- # return 0 00:03:37.483 17:58:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.483 17:58:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.483 17:58:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.483 17:58:35 -- setup/common.sh@18 -- # local node= 00:03:37.483 17:58:35 -- setup/common.sh@19 -- # local var val 00:03:37.483 17:58:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.483 17:58:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.483 17:58:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.483 17:58:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.483 17:58:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.483 17:58:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551232 kB' 'MemAvailable: 9492376 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888684 kB' 'Inactive: 1383744 kB' 'Active(anon): 129500 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120608 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144696 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74388 kB' 'KernelStack: 6448 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.483 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.483 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 
17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
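The /proc/meminfo snapshots in this stretch are internally consistent on the hugepage side: Hugepagesize is 2048 kB and HugePages_Total is 1024, and 1024 pages × 2048 kB/page = 2,097,152 kB, which matches the Hugetlb: 2097152 kB field; HugePages_Free: 1024 shows that none of the 2 GiB of hugepage memory is in use yet at this point in the run.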
00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.484 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.484 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.485 17:58:35 -- 
setup/common.sh@33 -- # echo 0 00:03:37.485 17:58:35 -- setup/common.sh@33 -- # return 0 00:03:37.485 17:58:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:37.485 17:58:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.485 17:58:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.485 17:58:35 -- setup/common.sh@18 -- # local node= 00:03:37.485 17:58:35 -- setup/common.sh@19 -- # local var val 00:03:37.485 17:58:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.485 17:58:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.485 17:58:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.485 17:58:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.485 17:58:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.485 17:58:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550980 kB' 'MemAvailable: 9492124 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888500 kB' 'Inactive: 1383744 kB' 'Active(anon): 129316 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120428 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144696 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74388 kB' 'KernelStack: 6464 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- 
setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.485 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.485 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 
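Once AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total have been pulled out of these snapshots, verify_nr_hugepages (a little further on in this trace) echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then evaluates '(( 1024 == nr_hugepages + surp + resv ))'. A rough, self-contained sketch of that accounting, under the same caveats as the sketch above (awk is used here for brevity where the real script reuses its own meminfo helper; the function name is illustrative):

verify_nr_hugepages_sketch() {
    # Usage: verify_nr_hugepages_sketch [expected]; the default matches the
    # 1024 pages configured earlier in this run.
    local expected=${1:-1024} total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    # Both checks pass in this log: 1024 == 1024 + 0 + 0, and 1024 == 1024.
    (( total == expected + surp + resv )) && (( total == expected ))
}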
00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.486 17:58:35 -- setup/common.sh@33 -- # echo 0 00:03:37.486 17:58:35 -- setup/common.sh@33 -- # return 0 00:03:37.486 17:58:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:37.486 nr_hugepages=1024 00:03:37.486 17:58:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.486 resv_hugepages=0 00:03:37.486 17:58:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.486 surplus_hugepages=0 00:03:37.486 17:58:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.486 anon_hugepages=0 00:03:37.486 17:58:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.486 17:58:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.486 17:58:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.486 17:58:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.486 17:58:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.486 17:58:35 -- setup/common.sh@18 -- # local node= 00:03:37.486 17:58:35 -- setup/common.sh@19 -- # local var val 00:03:37.486 17:58:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.486 17:58:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.486 17:58:35 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:37.486 17:58:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.486 17:58:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.486 17:58:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550980 kB' 'MemAvailable: 9492124 kB' 'Buffers: 2436 kB' 'Cached: 2150956 kB' 'SwapCached: 0 kB' 'Active: 888256 kB' 'Inactive: 1383744 kB' 'Active(anon): 129072 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120224 kB' 'Mapped: 48576 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 144688 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74380 kB' 'KernelStack: 6480 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.486 17:58:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.486 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.486 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 
17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.487 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.487 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.488 17:58:35 -- setup/common.sh@33 -- # echo 1024 00:03:37.488 17:58:35 -- setup/common.sh@33 -- # return 0 00:03:37.488 17:58:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.488 17:58:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.488 17:58:35 -- setup/hugepages.sh@27 -- # local node 00:03:37.488 17:58:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.488 17:58:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.488 17:58:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.488 17:58:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.488 17:58:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.488 17:58:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.488 17:58:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.488 17:58:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.488 17:58:35 -- setup/common.sh@18 -- # local node=0 00:03:37.488 17:58:35 -- setup/common.sh@19 -- # local var val 00:03:37.488 17:58:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.488 17:58:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.488 17:58:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.488 17:58:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.488 17:58:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.488 17:58:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550980 kB' 'MemUsed: 4690996 kB' 'SwapCached: 0 kB' 'Active: 888524 kB' 'Inactive: 1383744 kB' 'Active(anon): 129340 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1383744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2153392 kB' 'Mapped: 48576 kB' 'AnonPages: 120492 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 144688 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 74380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 
0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 
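The HugePages_Total lookup above, and the HugePages_Surp lookup that continues below, are the same field-matching loop from setup/common.sh traced once per meminfo line: pick the right meminfo file, read it field by field, and echo the value when the requested key matches. A minimal standalone sketch of that idea (the function name is illustrative, and sed stands in for the extglob prefix-strip the traced helper uses):

    # Print the value of one meminfo field, optionally from a NUMA node's file.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node statistics live under /sys when a node is requested.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <n> "; drop it so both
        # layouts parse identically, then scan field by field as in the trace.
        local mem=()
        mapfile -t mem < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"        # kB for sizes, a bare count for HugePages_*
                return 0
            fi
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total      # -> 1024 on this runner
    get_meminfo_sketch HugePages_Surp 0     # -> 0 for node 0, as echoed above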
00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.488 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.488 17:58:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # continue 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.489 17:58:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.489 17:58:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.489 17:58:35 -- setup/common.sh@33 -- # echo 0 00:03:37.489 17:58:35 -- setup/common.sh@33 -- # return 0 00:03:37.489 17:58:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.489 17:58:35 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.489 17:58:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.489 17:58:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.489 node0=1024 expecting 1024 00:03:37.489 17:58:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.489 17:58:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.489 00:03:37.489 real 0m1.003s 00:03:37.489 user 0m0.491s 00:03:37.489 sys 0m0.555s 00:03:37.489 17:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.489 17:58:35 -- common/autotest_common.sh@10 -- # set +x 00:03:37.489 ************************************ 00:03:37.489 END TEST no_shrink_alloc 00:03:37.489 ************************************ 00:03:37.489 17:58:35 -- setup/hugepages.sh@217 -- # clear_hp 00:03:37.489 17:58:35 -- setup/hugepages.sh@37 -- # local node hp 00:03:37.489 17:58:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:37.489 17:58:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.489 17:58:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:37.489 17:58:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.489 17:58:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:37.489 17:58:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:37.489 17:58:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:37.489 00:03:37.489 real 0m4.505s 00:03:37.489 user 0m2.121s 00:03:37.489 sys 0m2.510s 00:03:37.489 17:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.489 17:58:35 -- common/autotest_common.sh@10 -- # set +x 00:03:37.489 ************************************ 00:03:37.489 END TEST hugepages 00:03:37.489 ************************************ 00:03:37.489 17:58:35 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:37.489 17:58:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:37.489 17:58:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.489 17:58:35 -- common/autotest_common.sh@10 -- # set +x 00:03:37.748 ************************************ 00:03:37.748 START TEST driver 00:03:37.748 ************************************ 00:03:37.748 17:58:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:37.748 * Looking for test storage... 
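The driver suite that starts here exercises guess_driver from test/setup/driver.sh: it prefers VFIO when IOMMU groups are populated (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic if modprobe can resolve the module, which is what happens on this IOMMU-less VM in the trace below. A simplified sketch of that decision (function name illustrative, logic condensed from the trace rather than copied from the script):

    pick_driver_sketch() {
        shopt -s nullglob                       # empty iommu_groups dir -> empty array
        local groups=(/sys/kernel/iommu_groups/*) unsafe=""
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            # --show-depends only lists the insmod lines (uio.ko, uio_pci_generic.ko)
            # without loading anything, so this merely tests availability.
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }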
00:03:37.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.748 17:58:35 -- setup/driver.sh@68 -- # setup reset 00:03:37.748 17:58:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.748 17:58:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:38.319 17:58:36 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:38.319 17:58:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:38.319 17:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:38.319 17:58:36 -- common/autotest_common.sh@10 -- # set +x 00:03:38.319 ************************************ 00:03:38.319 START TEST guess_driver 00:03:38.319 ************************************ 00:03:38.319 17:58:36 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:38.319 17:58:36 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:38.319 17:58:36 -- setup/driver.sh@47 -- # local fail=0 00:03:38.319 17:58:36 -- setup/driver.sh@49 -- # pick_driver 00:03:38.319 17:58:36 -- setup/driver.sh@36 -- # vfio 00:03:38.319 17:58:36 -- setup/driver.sh@21 -- # local iommu_grups 00:03:38.319 17:58:36 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:38.319 17:58:36 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:38.319 17:58:36 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:38.319 17:58:36 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:38.319 17:58:36 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:38.319 17:58:36 -- setup/driver.sh@32 -- # return 1 00:03:38.319 17:58:36 -- setup/driver.sh@38 -- # uio 00:03:38.319 17:58:36 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:38.319 17:58:36 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:38.319 17:58:36 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:38.319 17:58:36 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:38.319 17:58:36 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:38.319 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:38.319 17:58:36 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:38.319 17:58:36 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:38.319 17:58:36 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:38.319 17:58:36 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:38.319 Looking for driver=uio_pci_generic 00:03:38.319 17:58:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.319 17:58:36 -- setup/driver.sh@45 -- # setup output config 00:03:38.319 17:58:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.319 17:58:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.885 17:58:36 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:38.885 17:58:36 -- setup/driver.sh@58 -- # continue 00:03:38.885 17:58:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.885 17:58:36 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.885 17:58:36 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:38.885 17:58:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.143 17:58:36 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.143 17:58:36 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:39.143 17:58:36 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.143 17:58:36 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:39.143 17:58:36 -- setup/driver.sh@65 -- # setup reset 00:03:39.143 17:58:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.143 17:58:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.709 00:03:39.709 real 0m1.396s 00:03:39.709 user 0m0.481s 00:03:39.709 sys 0m0.916s 00:03:39.709 17:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.709 17:58:37 -- common/autotest_common.sh@10 -- # set +x 00:03:39.709 ************************************ 00:03:39.709 END TEST guess_driver 00:03:39.709 ************************************ 00:03:39.709 00:03:39.709 real 0m2.056s 00:03:39.709 user 0m0.648s 00:03:39.709 sys 0m1.446s 00:03:39.709 17:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.709 17:58:37 -- common/autotest_common.sh@10 -- # set +x 00:03:39.709 ************************************ 00:03:39.709 END TEST driver 00:03:39.709 ************************************ 00:03:39.709 17:58:37 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:39.709 17:58:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:39.709 17:58:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:39.709 17:58:37 -- common/autotest_common.sh@10 -- # set +x 00:03:39.709 ************************************ 00:03:39.709 START TEST devices 00:03:39.710 ************************************ 00:03:39.710 17:58:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:39.710 * Looking for test storage... 00:03:39.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.710 17:58:37 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:39.710 17:58:37 -- setup/devices.sh@192 -- # setup reset 00:03:39.710 17:58:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.710 17:58:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.646 17:58:38 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:40.646 17:58:38 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:40.646 17:58:38 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:40.646 17:58:38 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:40.646 17:58:38 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:40.646 17:58:38 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:40.646 17:58:38 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:40.646 17:58:38 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.646 17:58:38 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:40.646 17:58:38 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:40.646 17:58:38 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:40.646 17:58:38 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:40.646 17:58:38 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:40.646 17:58:38 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:40.646 17:58:38 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:40.646 17:58:38 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:03:40.646 17:58:38 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:03:40.646 17:58:38 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:40.646 17:58:38 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:40.646 17:58:38 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:40.646 17:58:38 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:03:40.646 17:58:38 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:03:40.646 17:58:38 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:40.646 17:58:38 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:40.646 17:58:38 -- setup/devices.sh@196 -- # blocks=() 00:03:40.646 17:58:38 -- setup/devices.sh@196 -- # declare -a blocks 00:03:40.646 17:58:38 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:40.646 17:58:38 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:40.646 17:58:38 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:40.646 17:58:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.646 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:40.646 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.646 17:58:38 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:40.646 17:58:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:40.646 17:58:38 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:40.646 17:58:38 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:40.646 17:58:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:40.646 No valid GPT data, bailing 00:03:40.646 17:58:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.646 17:58:38 -- scripts/common.sh@393 -- # pt= 00:03:40.646 17:58:38 -- scripts/common.sh@394 -- # return 1 00:03:40.646 17:58:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:40.646 17:58:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:40.646 17:58:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:40.646 17:58:38 -- setup/common.sh@80 -- # echo 5368709120 00:03:40.646 17:58:38 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:40.646 17:58:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.646 17:58:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:40.646 17:58:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.646 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:40.646 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:40.646 17:58:38 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:40.646 17:58:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:40.646 17:58:38 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:40.646 17:58:38 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:40.646 17:58:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:40.646 No valid GPT data, bailing 00:03:40.646 17:58:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:40.646 17:58:38 -- scripts/common.sh@393 -- # pt= 00:03:40.646 17:58:38 -- scripts/common.sh@394 -- # return 1 00:03:40.646 17:58:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:40.646 17:58:38 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:40.646 17:58:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:40.646 17:58:38 -- setup/common.sh@80 -- # echo 4294967296 00:03:40.646 17:58:38 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.646 17:58:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.646 17:58:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:40.646 17:58:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.646 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:40.646 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:40.646 17:58:38 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:40.646 17:58:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:40.646 17:58:38 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:40.646 17:58:38 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:40.646 17:58:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:40.905 No valid GPT data, bailing 00:03:40.905 17:58:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:40.905 17:58:38 -- scripts/common.sh@393 -- # pt= 00:03:40.905 17:58:38 -- scripts/common.sh@394 -- # return 1 00:03:40.905 17:58:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:40.905 17:58:38 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:40.905 17:58:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:40.905 17:58:38 -- setup/common.sh@80 -- # echo 4294967296 00:03:40.905 17:58:38 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.905 17:58:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.905 17:58:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:40.905 17:58:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.905 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:40.905 17:58:38 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:40.905 17:58:38 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:40.905 17:58:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:40.905 17:58:38 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:40.905 17:58:38 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:40.905 17:58:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:40.905 No valid GPT data, bailing 00:03:40.905 17:58:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:40.905 17:58:38 -- scripts/common.sh@393 -- # pt= 00:03:40.905 17:58:38 -- scripts/common.sh@394 -- # return 1 00:03:40.905 17:58:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:40.905 17:58:38 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:40.905 17:58:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:40.905 17:58:38 -- setup/common.sh@80 -- # echo 4294967296 00:03:40.905 17:58:38 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.905 17:58:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.905 17:58:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:40.905 17:58:38 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:40.905 17:58:38 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:40.905 17:58:38 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:40.905 17:58:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.905 17:58:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.905 17:58:38 -- common/autotest_common.sh@10 -- # set +x 00:03:40.905 
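Before the mount tests begin, the devices suite above has walked /sys/block/nvme*, checked each namespace's zoned flag (all report none here), and kept only disks that show no partition-table signature and are at least min_disk_size (3 GiB), recording each disk's PCI address along the way. A rough standalone sketch of that filter (variable names illustrative; the real script consults scripts/spdk-gpt.py before falling back to blkid):

    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in setup/devices.sh
    candidates=()
    for sysblk in /sys/block/nvme*; do
        [[ -e $sysblk ]] || continue
        blk=${sysblk##*/}
        [[ $blk == *c* ]] && continue            # skip hidden controller/multipath nodes
        # A non-empty PTTYPE (gpt, dos, ...) means the disk is already in use.
        pt=$(blkid -s PTTYPE -o value "/dev/$blk" 2>/dev/null)
        [[ -n $pt ]] && continue
        size_bytes=$(( $(<"$sysblk/size") * 512 ))   # the size file counts 512-byte sectors
        (( size_bytes >= min_disk_size )) && candidates+=("$blk")
    done
    (( ${#candidates[@]} )) && printf 'usable disk: %s\n' "${candidates[@]}"

On this runner all four namespaces pass the size check (5 GiB for nvme0n1, 4 GiB each for nvme1n1..n3), and nvme0n1 becomes the test disk for the mount suites that follow.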
************************************ 00:03:40.905 START TEST nvme_mount 00:03:40.905 ************************************ 00:03:40.905 17:58:38 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:40.905 17:58:38 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:40.905 17:58:38 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:40.905 17:58:38 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.905 17:58:38 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:40.905 17:58:38 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:40.905 17:58:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.905 17:58:38 -- setup/common.sh@40 -- # local part_no=1 00:03:40.905 17:58:38 -- setup/common.sh@41 -- # local size=1073741824 00:03:40.905 17:58:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.905 17:58:38 -- setup/common.sh@44 -- # parts=() 00:03:40.905 17:58:38 -- setup/common.sh@44 -- # local parts 00:03:40.905 17:58:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.905 17:58:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.905 17:58:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.905 17:58:38 -- setup/common.sh@46 -- # (( part++ )) 00:03:40.905 17:58:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.905 17:58:38 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:40.905 17:58:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.905 17:58:38 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:42.280 Creating new GPT entries in memory. 00:03:42.280 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:42.280 other utilities. 00:03:42.280 17:58:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:42.280 17:58:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.280 17:58:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:42.280 17:58:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:42.280 17:58:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:43.216 Creating new GPT entries in memory. 00:03:43.216 The operation has completed successfully. 
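The sgdisk messages just above come from the partition helper in setup/common.sh: zap any existing GPT, then create the test partition under flock while a companion script waits for the partition uevent, before the trace below formats and mounts it. A minimal sketch of the same sequence (destructive; the mount point name is illustrative and the uevent sync is only referenced in a comment):

    disk=/dev/nvme0n1
    size=1073741824
    (( size /= 4096 ))                       # sector count the test uses per partition
    sgdisk "$disk" --zap-all                 # prints the "GPT data structures destroyed!" banner
    part_start=2048
    part_end=$(( part_start + size - 1 ))    # 2048..264191, matching the trace
    # flock serialises access to the disk while the new partition is written;
    # the real run also waits on scripts/sync_dev_uevents.sh for the nvme0n1p1 add event.
    flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
    udevadm settle                           # give udev time to create ${disk}p1
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p /mnt/nvme_mount_test            # illustrative mount point
    mount "${disk}p1" /mnt/nvme_mount_test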
00:03:43.216 17:58:40 -- setup/common.sh@57 -- # (( part++ )) 00:03:43.216 17:58:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.216 17:58:40 -- setup/common.sh@62 -- # wait 53828 00:03:43.216 17:58:40 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.216 17:58:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:43.216 17:58:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.216 17:58:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:43.216 17:58:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:43.216 17:58:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.216 17:58:40 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.216 17:58:40 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:43.216 17:58:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:43.216 17:58:40 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.216 17:58:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.216 17:58:40 -- setup/devices.sh@53 -- # local found=0 00:03:43.216 17:58:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.216 17:58:40 -- setup/devices.sh@56 -- # : 00:03:43.216 17:58:40 -- setup/devices.sh@59 -- # local pci status 00:03:43.216 17:58:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.216 17:58:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:43.216 17:58:40 -- setup/devices.sh@47 -- # setup output config 00:03:43.216 17:58:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.216 17:58:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.216 17:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:43.216 17:58:41 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:43.216 17:58:41 -- setup/devices.sh@63 -- # found=1 00:03:43.216 17:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.216 17:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:43.216 17:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.785 17:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:43.785 17:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.785 17:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:43.785 17:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.785 17:58:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.785 17:58:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:43.785 17:58:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.785 17:58:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.785 17:58:41 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.785 17:58:41 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:43.785 17:58:41 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.785 17:58:41 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.785 17:58:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.785 17:58:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.785 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.785 17:58:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.785 17:58:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.043 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:44.043 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:44.043 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:44.043 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:44.043 17:58:41 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:44.043 17:58:41 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:44.043 17:58:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.043 17:58:41 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:44.043 17:58:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:44.043 17:58:41 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.043 17:58:41 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:44.043 17:58:41 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:44.043 17:58:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:44.043 17:58:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.043 17:58:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:44.043 17:58:41 -- setup/devices.sh@53 -- # local found=0 00:03:44.043 17:58:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.043 17:58:41 -- setup/devices.sh@56 -- # : 00:03:44.043 17:58:41 -- setup/devices.sh@59 -- # local pci status 00:03:44.043 17:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.043 17:58:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:44.043 17:58:41 -- setup/devices.sh@47 -- # setup output config 00:03:44.043 17:58:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.043 17:58:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.302 17:58:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:44.302 17:58:42 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:44.302 17:58:42 -- setup/devices.sh@63 -- # found=1 00:03:44.302 17:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.302 17:58:42 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:44.302 
17:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.561 17:58:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:44.561 17:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.819 17:58:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:44.819 17:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.819 17:58:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.819 17:58:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:44.819 17:58:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.819 17:58:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.819 17:58:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:44.819 17:58:42 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.819 17:58:42 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:44.819 17:58:42 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:44.819 17:58:42 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:44.820 17:58:42 -- setup/devices.sh@50 -- # local mount_point= 00:03:44.820 17:58:42 -- setup/devices.sh@51 -- # local test_file= 00:03:44.820 17:58:42 -- setup/devices.sh@53 -- # local found=0 00:03:44.820 17:58:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:44.820 17:58:42 -- setup/devices.sh@59 -- # local pci status 00:03:44.820 17:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.820 17:58:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:44.820 17:58:42 -- setup/devices.sh@47 -- # setup output config 00:03:44.820 17:58:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.820 17:58:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:45.078 17:58:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:45.078 17:58:42 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:45.078 17:58:42 -- setup/devices.sh@63 -- # found=1 00:03:45.078 17:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.078 17:58:42 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:45.078 17:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.341 17:58:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:45.341 17:58:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.603 17:58:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:45.603 17:58:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.603 17:58:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.603 17:58:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:45.603 17:58:43 -- setup/devices.sh@68 -- # return 0 00:03:45.603 17:58:43 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:45.603 17:58:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.603 17:58:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.603 17:58:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.603 17:58:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:45.603 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:03:45.603 00:03:45.603 real 0m4.730s 00:03:45.603 user 0m1.050s 00:03:45.603 sys 0m1.251s 00:03:45.603 17:58:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.603 17:58:43 -- common/autotest_common.sh@10 -- # set +x 00:03:45.603 ************************************ 00:03:45.603 END TEST nvme_mount 00:03:45.603 ************************************ 00:03:45.603 17:58:43 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:45.603 17:58:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.603 17:58:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.603 17:58:43 -- common/autotest_common.sh@10 -- # set +x 00:03:45.603 ************************************ 00:03:45.603 START TEST dm_mount 00:03:45.603 ************************************ 00:03:45.603 17:58:43 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:45.603 17:58:43 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:45.603 17:58:43 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:45.603 17:58:43 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:45.603 17:58:43 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:45.603 17:58:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:45.603 17:58:43 -- setup/common.sh@40 -- # local part_no=2 00:03:45.603 17:58:43 -- setup/common.sh@41 -- # local size=1073741824 00:03:45.603 17:58:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:45.603 17:58:43 -- setup/common.sh@44 -- # parts=() 00:03:45.603 17:58:43 -- setup/common.sh@44 -- # local parts 00:03:45.603 17:58:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:45.603 17:58:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.603 17:58:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.603 17:58:43 -- setup/common.sh@46 -- # (( part++ )) 00:03:45.603 17:58:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.603 17:58:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.603 17:58:43 -- setup/common.sh@46 -- # (( part++ )) 00:03:45.603 17:58:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.603 17:58:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:45.603 17:58:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:45.603 17:58:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:46.983 Creating new GPT entries in memory. 00:03:46.983 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:46.983 other utilities. 00:03:46.983 17:58:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:46.983 17:58:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.983 17:58:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.983 17:58:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.983 17:58:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:47.951 Creating new GPT entries in memory. 00:03:47.951 The operation has completed successfully. 00:03:47.951 17:58:45 -- setup/common.sh@57 -- # (( part++ )) 00:03:47.951 17:58:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.951 17:58:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:47.951 17:58:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.951 17:58:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:48.887 The operation has completed successfully. 00:03:48.887 17:58:46 -- setup/common.sh@57 -- # (( part++ )) 00:03:48.887 17:58:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.887 17:58:46 -- setup/common.sh@62 -- # wait 54315 00:03:48.887 17:58:46 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:48.887 17:58:46 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.887 17:58:46 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:48.887 17:58:46 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:48.887 17:58:46 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:48.887 17:58:46 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.887 17:58:46 -- setup/devices.sh@161 -- # break 00:03:48.887 17:58:46 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.887 17:58:46 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:48.887 17:58:46 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:48.887 17:58:46 -- setup/devices.sh@166 -- # dm=dm-0 00:03:48.887 17:58:46 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:48.887 17:58:46 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:48.887 17:58:46 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.887 17:58:46 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:48.887 17:58:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.887 17:58:46 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.887 17:58:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:48.887 17:58:46 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.887 17:58:46 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:48.887 17:58:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:48.887 17:58:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:48.887 17:58:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.887 17:58:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:48.887 17:58:46 -- setup/devices.sh@53 -- # local found=0 00:03:48.887 17:58:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.887 17:58:46 -- setup/devices.sh@56 -- # : 00:03:48.887 17:58:46 -- setup/devices.sh@59 -- # local pci status 00:03:48.887 17:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.887 17:58:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:48.887 17:58:46 -- setup/devices.sh@47 -- # setup output config 00:03:48.887 17:58:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.887 17:58:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.146 17:58:46 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:49.146 17:58:46 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:49.146 17:58:46 -- setup/devices.sh@63 -- # found=1 00:03:49.146 17:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.146 17:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:49.146 17:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.404 17:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:49.404 17:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.404 17:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:49.404 17:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.663 17:58:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.663 17:58:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:49.663 17:58:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:49.663 17:58:47 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.663 17:58:47 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:49.663 17:58:47 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:49.663 17:58:47 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:49.663 17:58:47 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:49.663 17:58:47 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:49.663 17:58:47 -- setup/devices.sh@50 -- # local mount_point= 00:03:49.663 17:58:47 -- setup/devices.sh@51 -- # local test_file= 00:03:49.663 17:58:47 -- setup/devices.sh@53 -- # local found=0 00:03:49.663 17:58:47 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.663 17:58:47 -- setup/devices.sh@59 -- # local pci status 00:03:49.663 17:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.663 17:58:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:49.663 17:58:47 -- setup/devices.sh@47 -- # setup output config 00:03:49.663 17:58:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.663 17:58:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.663 17:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:49.663 17:58:47 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:49.663 17:58:47 -- setup/devices.sh@63 -- # found=1 00:03:49.663 17:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.663 17:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:49.663 17:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.230 17:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:50.230 17:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.230 17:58:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:50.230 17:58:48 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.230 17:58:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.230 17:58:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:50.230 17:58:48 -- setup/devices.sh@68 -- # return 0 00:03:50.230 17:58:48 -- setup/devices.sh@187 -- # cleanup_dm 00:03:50.230 17:58:48 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.230 17:58:48 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.230 17:58:48 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:50.230 17:58:48 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.230 17:58:48 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:50.230 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.230 17:58:48 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.230 17:58:48 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:50.489 00:03:50.489 real 0m4.678s 00:03:50.489 user 0m0.733s 00:03:50.489 sys 0m0.881s 00:03:50.489 17:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.489 17:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:50.489 ************************************ 00:03:50.489 END TEST dm_mount 00:03:50.489 ************************************ 00:03:50.489 17:58:48 -- setup/devices.sh@1 -- # cleanup 00:03:50.489 17:58:48 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:50.489 17:58:48 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.489 17:58:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.489 17:58:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:50.489 17:58:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.489 17:58:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.747 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:50.747 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:50.747 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.747 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.747 17:58:48 -- setup/devices.sh@12 -- # cleanup_dm 00:03:50.747 17:58:48 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.747 17:58:48 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.747 17:58:48 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.747 17:58:48 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.747 17:58:48 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.747 17:58:48 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:50.747 00:03:50.747 real 0m10.994s 00:03:50.747 user 0m2.470s 00:03:50.747 sys 0m2.750s 00:03:50.747 17:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.747 17:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:50.747 ************************************ 00:03:50.747 END TEST devices 00:03:50.747 ************************************ 00:03:50.747 00:03:50.747 real 0m21.958s 00:03:50.747 user 0m7.060s 00:03:50.747 sys 0m9.265s 00:03:50.747 17:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.747 17:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:50.747 ************************************ 00:03:50.747 END TEST setup.sh 00:03:50.747 ************************************ 00:03:50.747 17:58:48 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:51.006 Hugepages 00:03:51.006 node hugesize free / total 00:03:51.006 node0 1048576kB 0 / 0 00:03:51.006 node0 2048kB 2048 / 2048 00:03:51.006 00:03:51.006 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.006 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:51.006 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:51.265 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:51.265 17:58:48 -- spdk/autotest.sh@141 -- # uname -s 00:03:51.265 17:58:48 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:51.265 17:58:48 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:51.265 17:58:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:51.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.091 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.091 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.091 17:58:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:53.026 17:58:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:53.026 17:58:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:53.026 17:58:50 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.026 17:58:50 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:53.026 17:58:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.026 17:58:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.026 17:58:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.026 17:58:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.026 17:58:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.291 17:58:50 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:53.292 17:58:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:53.292 17:58:50 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.550 Waiting for block devices as requested 00:03:53.550 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:03:53.550 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:03:53.809 17:58:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:53.809 17:58:51 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:03:53.809 17:58:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:03:53.809 17:58:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:53.809 17:58:51 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:53.809 17:58:51 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:53.809 17:58:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:03:53.809 17:58:51 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:53.809 17:58:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:53.809 17:58:51 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:53.809 17:58:51 -- common/autotest_common.sh@1542 -- # continue 00:03:53.809 17:58:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:53.809 17:58:51 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:03:53.809 17:58:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:03:53.809 17:58:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:03:53.809 17:58:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:03:53.809 17:58:51 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:53.809 17:58:51 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:53.809 17:58:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:03:53.809 17:58:51 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:53.809 17:58:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:53.809 17:58:51 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:53.809 17:58:51 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:53.809 17:58:51 -- common/autotest_common.sh@1542 -- # continue 00:03:53.809 17:58:51 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:53.809 17:58:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:53.809 17:58:51 -- common/autotest_common.sh@10 -- # set +x 00:03:53.809 17:58:51 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:53.809 17:58:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:53.809 17:58:51 -- common/autotest_common.sh@10 -- # set +x 00:03:53.809 17:58:51 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.635 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.635 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:03:54.635 17:58:52 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:54.635 17:58:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:54.635 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:03:54.635 17:58:52 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:54.635 17:58:52 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:54.635 17:58:52 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:54.635 17:58:52 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:54.635 17:58:52 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:54.635 17:58:52 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:54.635 17:58:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:54.635 17:58:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:54.635 17:58:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.635 17:58:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:54.635 17:58:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:54.893 17:58:52 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:54.893 17:58:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:54.893 17:58:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:54.893 17:58:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:54.893 17:58:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:03:54.893 17:58:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:54.893 17:58:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:54.893 17:58:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:03:54.893 17:58:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:03:54.893 17:58:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:54.893 17:58:52 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:03:54.893 17:58:52 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:54.893 17:58:52 -- common/autotest_common.sh@1578 -- # return 0 00:03:54.893 17:58:52 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:03:54.893 17:58:52 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:54.893 17:58:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:54.893 17:58:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:54.893 17:58:52 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:54.894 17:58:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:54.894 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:03:54.894 17:58:52 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:54.894 17:58:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.894 17:58:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.894 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:03:54.894 ************************************ 00:03:54.894 START TEST env 00:03:54.894 ************************************ 00:03:54.894 17:58:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:54.894 * Looking for test storage... 
00:03:54.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:54.894 17:58:52 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:54.894 17:58:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.894 17:58:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.894 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:03:54.894 ************************************ 00:03:54.894 START TEST env_memory 00:03:54.894 ************************************ 00:03:54.894 17:58:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:54.894 00:03:54.894 00:03:54.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.894 http://cunit.sourceforge.net/ 00:03:54.894 00:03:54.894 00:03:54.894 Suite: memory 00:03:54.894 Test: alloc and free memory map ...[2024-04-25 17:58:52.749368] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:54.894 passed 00:03:54.894 Test: mem map translation ...[2024-04-25 17:58:52.780777] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:54.894 [2024-04-25 17:58:52.780816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:54.894 [2024-04-25 17:58:52.780871] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:54.894 [2024-04-25 17:58:52.780883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:55.153 passed 00:03:55.153 Test: mem map registration ...[2024-04-25 17:58:52.844737] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:55.153 [2024-04-25 17:58:52.844786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:55.153 passed 00:03:55.153 Test: mem map adjacent registrations ...passed 00:03:55.153 00:03:55.153 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.153 suites 1 1 n/a 0 0 00:03:55.153 tests 4 4 4 0 0 00:03:55.153 asserts 152 152 152 0 n/a 00:03:55.153 00:03:55.153 Elapsed time = 0.214 seconds 00:03:55.153 00:03:55.153 real 0m0.228s 00:03:55.153 user 0m0.217s 00:03:55.153 sys 0m0.009s 00:03:55.153 17:58:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.153 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:03:55.153 ************************************ 00:03:55.153 END TEST env_memory 00:03:55.153 ************************************ 00:03:55.153 17:58:52 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:55.153 17:58:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.153 17:58:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.153 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:03:55.153 ************************************ 00:03:55.153 START TEST env_vtophys 00:03:55.153 ************************************ 00:03:55.153 17:58:52 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:55.153 EAL: lib.eal log level changed from notice to debug 00:03:55.153 EAL: Detected lcore 0 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 1 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 2 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 3 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 4 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 5 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 6 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 7 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 8 as core 0 on socket 0 00:03:55.153 EAL: Detected lcore 9 as core 0 on socket 0 00:03:55.153 EAL: Maximum logical cores by configuration: 128 00:03:55.153 EAL: Detected CPU lcores: 10 00:03:55.153 EAL: Detected NUMA nodes: 1 00:03:55.153 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:55.153 EAL: Detected shared linkage of DPDK 00:03:55.153 EAL: No shared files mode enabled, IPC will be disabled 00:03:55.153 EAL: Selected IOVA mode 'PA' 00:03:55.153 EAL: Probing VFIO support... 00:03:55.153 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:55.153 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:55.153 EAL: Ask a virtual area of 0x2e000 bytes 00:03:55.153 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:55.153 EAL: Setting up physically contiguous memory... 00:03:55.153 EAL: Setting maximum number of open files to 524288 00:03:55.153 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:55.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:55.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.153 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:55.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.153 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:55.153 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:55.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.153 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:55.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.153 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:55.153 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:55.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.153 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:55.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.153 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:55.153 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:55.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.153 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:55.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.153 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:55.153 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:55.153 EAL: Hugepages will be freed exactly as allocated. 
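[editor's sketch, not part of the captured log] The memseg lists EAL reserves above are backed by the host's 2 MB hugepage pool (the "node0 2048kB 2048 / 2048" line in the earlier setup.sh status output). A minimal illustration of inspecting and sizing that pool with the standard kernel interfaces; the count of 2048 pages is only an example matching this run:

  # current hugepage pool that EAL will map
  grep -i huge /proc/meminfo
  # reserve 2048 x 2 MB pages system-wide (what this VM reports above)
  echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
  # per-NUMA-node view, matching the "node0 2048kB" line
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages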
00:03:55.153 EAL: No shared files mode enabled, IPC is disabled 00:03:55.153 EAL: No shared files mode enabled, IPC is disabled 00:03:55.411 EAL: TSC frequency is ~2200000 KHz 00:03:55.411 EAL: Main lcore 0 is ready (tid=7f7a35a8ba00;cpuset=[0]) 00:03:55.411 EAL: Trying to obtain current memory policy. 00:03:55.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.411 EAL: Restoring previous memory policy: 0 00:03:55.411 EAL: request: mp_malloc_sync 00:03:55.411 EAL: No shared files mode enabled, IPC is disabled 00:03:55.411 EAL: Heap on socket 0 was expanded by 2MB 00:03:55.411 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:55.411 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:55.411 EAL: Mem event callback 'spdk:(nil)' registered 00:03:55.411 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:55.411 00:03:55.411 00:03:55.411 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.411 http://cunit.sourceforge.net/ 00:03:55.411 00:03:55.411 00:03:55.411 Suite: components_suite 00:03:55.411 Test: vtophys_malloc_test ...passed 00:03:55.411 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:55.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.411 EAL: Restoring previous memory policy: 4 00:03:55.411 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.411 EAL: request: mp_malloc_sync 00:03:55.411 EAL: No shared files mode enabled, IPC is disabled 00:03:55.411 EAL: Heap on socket 0 was expanded by 4MB 00:03:55.411 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.411 EAL: request: mp_malloc_sync 00:03:55.411 EAL: No shared files mode enabled, IPC is disabled 00:03:55.411 EAL: Heap on socket 0 was shrunk by 4MB 00:03:55.411 EAL: Trying to obtain current memory policy. 00:03:55.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.411 EAL: Restoring previous memory policy: 4 00:03:55.411 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.411 EAL: request: mp_malloc_sync 00:03:55.411 EAL: No shared files mode enabled, IPC is disabled 00:03:55.411 EAL: Heap on socket 0 was expanded by 6MB 00:03:55.411 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was shrunk by 6MB 00:03:55.412 EAL: Trying to obtain current memory policy. 00:03:55.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.412 EAL: Restoring previous memory policy: 4 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was expanded by 10MB 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was shrunk by 10MB 00:03:55.412 EAL: Trying to obtain current memory policy. 
00:03:55.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.412 EAL: Restoring previous memory policy: 4 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was expanded by 18MB 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was shrunk by 18MB 00:03:55.412 EAL: Trying to obtain current memory policy. 00:03:55.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.412 EAL: Restoring previous memory policy: 4 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was expanded by 34MB 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was shrunk by 34MB 00:03:55.412 EAL: Trying to obtain current memory policy. 00:03:55.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.412 EAL: Restoring previous memory policy: 4 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was expanded by 66MB 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was shrunk by 66MB 00:03:55.412 EAL: Trying to obtain current memory policy. 00:03:55.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.412 EAL: Restoring previous memory policy: 4 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was expanded by 130MB 00:03:55.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.412 EAL: request: mp_malloc_sync 00:03:55.412 EAL: No shared files mode enabled, IPC is disabled 00:03:55.412 EAL: Heap on socket 0 was shrunk by 130MB 00:03:55.412 EAL: Trying to obtain current memory policy. 00:03:55.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.670 EAL: Restoring previous memory policy: 4 00:03:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.670 EAL: request: mp_malloc_sync 00:03:55.670 EAL: No shared files mode enabled, IPC is disabled 00:03:55.670 EAL: Heap on socket 0 was expanded by 258MB 00:03:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.670 EAL: request: mp_malloc_sync 00:03:55.670 EAL: No shared files mode enabled, IPC is disabled 00:03:55.670 EAL: Heap on socket 0 was shrunk by 258MB 00:03:55.670 EAL: Trying to obtain current memory policy. 
00:03:55.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.670 EAL: Restoring previous memory policy: 4 00:03:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.670 EAL: request: mp_malloc_sync 00:03:55.670 EAL: No shared files mode enabled, IPC is disabled 00:03:55.670 EAL: Heap on socket 0 was expanded by 514MB 00:03:55.928 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.928 EAL: request: mp_malloc_sync 00:03:55.928 EAL: No shared files mode enabled, IPC is disabled 00:03:55.928 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.928 EAL: Trying to obtain current memory policy. 00:03:55.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.195 EAL: Restoring previous memory policy: 4 00:03:56.196 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.196 EAL: request: mp_malloc_sync 00:03:56.196 EAL: No shared files mode enabled, IPC is disabled 00:03:56.196 EAL: Heap on socket 0 was expanded by 1026MB 00:03:56.460 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.719 passed 00:03:56.719 00:03:56.719 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.719 suites 1 1 n/a 0 0 00:03:56.719 tests 2 2 2 0 0 00:03:56.719 asserts 5330 5330 5330 0 n/a 00:03:56.719 00:03:56.719 Elapsed time = 1.230 seconds 00:03:56.719 EAL: request: mp_malloc_sync 00:03:56.719 EAL: No shared files mode enabled, IPC is disabled 00:03:56.719 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:56.719 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.719 EAL: request: mp_malloc_sync 00:03:56.719 EAL: No shared files mode enabled, IPC is disabled 00:03:56.719 EAL: Heap on socket 0 was shrunk by 2MB 00:03:56.719 EAL: No shared files mode enabled, IPC is disabled 00:03:56.719 EAL: No shared files mode enabled, IPC is disabled 00:03:56.719 EAL: No shared files mode enabled, IPC is disabled 00:03:56.719 00:03:56.719 real 0m1.429s 00:03:56.719 user 0m0.778s 00:03:56.719 sys 0m0.516s 00:03:56.719 17:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.719 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.719 ************************************ 00:03:56.719 END TEST env_vtophys 00:03:56.719 ************************************ 00:03:56.719 17:58:54 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:56.719 17:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:56.719 17:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:56.719 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.719 ************************************ 00:03:56.719 START TEST env_pci 00:03:56.719 ************************************ 00:03:56.719 17:58:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:56.719 00:03:56.719 00:03:56.719 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.719 http://cunit.sourceforge.net/ 00:03:56.719 00:03:56.719 00:03:56.719 Suite: pci 00:03:56.719 Test: pci_hook ...[2024-04-25 17:58:54.472915] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55493 has claimed it 00:03:56.719 passed 00:03:56.719 00:03:56.719 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.719 suites 1 1 n/a 0 0 00:03:56.719 tests 1 1 1 0 0 00:03:56.719 asserts 25 25 25 0 n/a 00:03:56.719 00:03:56.719 Elapsed time = 0.002 seconds 00:03:56.719 EAL: Cannot find device (10000:00:01.0) 00:03:56.719 EAL: Failed to attach device 
on primary process 00:03:56.719 00:03:56.719 real 0m0.022s 00:03:56.719 user 0m0.007s 00:03:56.719 sys 0m0.015s 00:03:56.719 17:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.719 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.719 ************************************ 00:03:56.719 END TEST env_pci 00:03:56.719 ************************************ 00:03:56.719 17:58:54 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:56.719 17:58:54 -- env/env.sh@15 -- # uname 00:03:56.719 17:58:54 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:56.719 17:58:54 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:56.719 17:58:54 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.719 17:58:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:03:56.719 17:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:56.719 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.719 ************************************ 00:03:56.719 START TEST env_dpdk_post_init 00:03:56.719 ************************************ 00:03:56.719 17:58:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.719 EAL: Detected CPU lcores: 10 00:03:56.719 EAL: Detected NUMA nodes: 1 00:03:56.719 EAL: Detected shared linkage of DPDK 00:03:56.719 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.719 EAL: Selected IOVA mode 'PA' 00:03:56.978 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.978 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:56.978 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:03:56.978 Starting DPDK initialization... 00:03:56.978 Starting SPDK post initialization... 00:03:56.978 SPDK NVMe probe 00:03:56.978 Attaching to 0000:00:06.0 00:03:56.978 Attaching to 0000:00:07.0 00:03:56.978 Attached to 0000:00:06.0 00:03:56.978 Attached to 0000:00:07.0 00:03:56.978 Cleaning up... 
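[editor's sketch, not part of the captured log] The spdk_nvme probe above attaches to 0000:00:06.0 and 0000:00:07.0 only because scripts/setup.sh earlier rebound them ("nvme -> uio_pci_generic"). A rough hand-rolled sketch of that sysfs rebind, for context only; it is not the script's exact logic, and the BDF is taken from this log:

  bdf=0000:00:06.0                                                        # controller address from this run
  sudo modprobe uio_pci_generic
  echo "$bdf" | sudo tee /sys/bus/pci/devices/$bdf/driver/unbind          # detach the kernel nvme driver
  echo uio_pci_generic | sudo tee /sys/bus/pci/devices/$bdf/driver_override
  echo "$bdf" | sudo tee /sys/bus/pci/drivers_probe                       # re-probe; binds uio_pci_generic
  echo | sudo tee /sys/bus/pci/devices/$bdf/driver_override               # clear the override afterwards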
00:03:56.978 00:03:56.978 real 0m0.172s 00:03:56.978 user 0m0.040s 00:03:56.978 sys 0m0.032s 00:03:56.978 17:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.978 ************************************ 00:03:56.978 END TEST env_dpdk_post_init 00:03:56.978 ************************************ 00:03:56.978 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.978 17:58:54 -- env/env.sh@26 -- # uname 00:03:56.978 17:58:54 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:56.978 17:58:54 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.978 17:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:56.978 17:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:56.978 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.978 ************************************ 00:03:56.978 START TEST env_mem_callbacks 00:03:56.978 ************************************ 00:03:56.978 17:58:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.978 EAL: Detected CPU lcores: 10 00:03:56.978 EAL: Detected NUMA nodes: 1 00:03:56.978 EAL: Detected shared linkage of DPDK 00:03:56.978 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.978 EAL: Selected IOVA mode 'PA' 00:03:56.978 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.978 00:03:56.978 00:03:56.978 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.978 http://cunit.sourceforge.net/ 00:03:56.978 00:03:56.978 00:03:56.978 Suite: memory 00:03:56.978 Test: test ... 00:03:56.978 register 0x200000200000 2097152 00:03:56.978 malloc 3145728 00:03:56.978 register 0x200000400000 4194304 00:03:56.978 buf 0x200000500000 len 3145728 PASSED 00:03:56.978 malloc 64 00:03:56.978 buf 0x2000004fff40 len 64 PASSED 00:03:56.978 malloc 4194304 00:03:56.978 register 0x200000800000 6291456 00:03:56.978 buf 0x200000a00000 len 4194304 PASSED 00:03:56.978 free 0x200000500000 3145728 00:03:56.978 free 0x2000004fff40 64 00:03:56.978 unregister 0x200000400000 4194304 PASSED 00:03:56.978 free 0x200000a00000 4194304 00:03:56.978 unregister 0x200000800000 6291456 PASSED 00:03:56.978 malloc 8388608 00:03:56.978 register 0x200000400000 10485760 00:03:56.978 buf 0x200000600000 len 8388608 PASSED 00:03:56.978 free 0x200000600000 8388608 00:03:56.978 unregister 0x200000400000 10485760 PASSED 00:03:56.978 passed 00:03:56.978 00:03:56.978 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.978 suites 1 1 n/a 0 0 00:03:56.978 tests 1 1 1 0 0 00:03:56.978 asserts 15 15 15 0 n/a 00:03:56.978 00:03:56.978 Elapsed time = 0.008 seconds 00:03:56.978 00:03:56.978 real 0m0.139s 00:03:56.978 user 0m0.012s 00:03:56.978 sys 0m0.024s 00:03:56.978 ************************************ 00:03:56.978 END TEST env_mem_callbacks 00:03:56.978 ************************************ 00:03:56.978 17:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.978 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:57.237 ************************************ 00:03:57.237 END TEST env 00:03:57.237 ************************************ 00:03:57.237 00:03:57.237 real 0m2.325s 00:03:57.237 user 0m1.170s 00:03:57.237 sys 0m0.806s 00:03:57.237 17:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.237 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:57.237 17:58:54 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
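[editor's sketch, not part of the captured log] The rpc suite that starts below drives everything through rpc_cmd; a manual equivalent, assuming a freshly built tree and the default /var/tmp/spdk.sock listen address (bdev_malloc_create and bdev_get_bdevs are the same RPC methods traced in the log):

  ./build/bin/spdk_tgt -e bdev &                 # start the target with bdev tracepoints, as the test does
  # once it reports listening on /var/tmp/spdk.sock:
  ./scripts/rpc.py bdev_malloc_create 8 512      # 8 MB malloc bdev, 512-byte blocks
  ./scripts/rpc.py bdev_get_bdevs | jq length    # count bdevs the way the test's jq checks do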
00:03:57.237 17:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.237 17:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.237 17:58:54 -- common/autotest_common.sh@10 -- # set +x 00:03:57.237 ************************************ 00:03:57.237 START TEST rpc 00:03:57.237 ************************************ 00:03:57.237 17:58:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:57.237 * Looking for test storage... 00:03:57.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.237 17:58:55 -- rpc/rpc.sh@65 -- # spdk_pid=55596 00:03:57.237 17:58:55 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.237 17:58:55 -- rpc/rpc.sh@67 -- # waitforlisten 55596 00:03:57.237 17:58:55 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:57.237 17:58:55 -- common/autotest_common.sh@819 -- # '[' -z 55596 ']' 00:03:57.237 17:58:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.237 17:58:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:57.237 17:58:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.237 17:58:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:57.237 17:58:55 -- common/autotest_common.sh@10 -- # set +x 00:03:57.237 [2024-04-25 17:58:55.128932] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:03:57.237 [2024-04-25 17:58:55.129301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55596 ] 00:03:57.495 [2024-04-25 17:58:55.265269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.495 [2024-04-25 17:58:55.399839] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:57.495 [2024-04-25 17:58:55.400053] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:57.495 [2024-04-25 17:58:55.400078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55596' to capture a snapshot of events at runtime. 00:03:57.495 [2024-04-25 17:58:55.400090] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55596 for offline analysis/debug. 
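[editor's sketch, not part of the captured log] The app_setup_trace notices above already name the trace tooling for this pid; acting on them would look roughly like the following (binary path assumes the usual build/bin layout of this repo):

  ./build/bin/spdk_trace -s spdk_tgt -p 55596                 # live snapshot, the command the NOTICE suggests
  cp /dev/shm/spdk_tgt_trace.pid55596 ./spdk_tgt_trace.shm    # keep the shm file for offline analysis, as advised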
00:03:57.495 [2024-04-25 17:58:55.400131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.431 17:58:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:58.431 17:58:56 -- common/autotest_common.sh@852 -- # return 0 00:03:58.431 17:58:56 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.431 17:58:56 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.431 17:58:56 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:58.431 17:58:56 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:58.431 17:58:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.431 17:58:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.431 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.431 ************************************ 00:03:58.431 START TEST rpc_integrity 00:03:58.431 ************************************ 00:03:58.431 17:58:56 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:58.431 17:58:56 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.431 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.431 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.431 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.431 17:58:56 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.431 17:58:56 -- rpc/rpc.sh@13 -- # jq length 00:03:58.431 17:58:56 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.431 17:58:56 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.431 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.431 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.431 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.431 17:58:56 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:58.431 17:58:56 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.431 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.431 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.431 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.431 17:58:56 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.431 { 00:03:58.431 "aliases": [ 00:03:58.431 "636fca25-9fd6-44b7-b995-10a4eca51839" 00:03:58.431 ], 00:03:58.431 "assigned_rate_limits": { 00:03:58.431 "r_mbytes_per_sec": 0, 00:03:58.431 "rw_ios_per_sec": 0, 00:03:58.431 "rw_mbytes_per_sec": 0, 00:03:58.431 "w_mbytes_per_sec": 0 00:03:58.431 }, 00:03:58.431 "block_size": 512, 00:03:58.431 "claimed": false, 00:03:58.431 "driver_specific": {}, 00:03:58.431 "memory_domains": [ 00:03:58.431 { 00:03:58.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.431 "dma_device_type": 2 00:03:58.431 } 00:03:58.431 ], 00:03:58.431 "name": "Malloc0", 00:03:58.431 "num_blocks": 16384, 00:03:58.431 "product_name": "Malloc disk", 00:03:58.431 "supported_io_types": { 00:03:58.431 "abort": true, 00:03:58.431 "compare": false, 00:03:58.431 "compare_and_write": false, 00:03:58.431 "flush": true, 00:03:58.431 "nvme_admin": false, 00:03:58.431 "nvme_io": false, 00:03:58.431 "read": true, 00:03:58.431 "reset": true, 00:03:58.431 "unmap": true, 00:03:58.431 "write": true, 00:03:58.431 "write_zeroes": true 00:03:58.431 }, 
00:03:58.431 "uuid": "636fca25-9fd6-44b7-b995-10a4eca51839", 00:03:58.431 "zoned": false 00:03:58.431 } 00:03:58.431 ]' 00:03:58.431 17:58:56 -- rpc/rpc.sh@17 -- # jq length 00:03:58.431 17:58:56 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.431 17:58:56 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:58.431 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.431 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.431 [2024-04-25 17:58:56.291572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:58.431 [2024-04-25 17:58:56.291621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.431 [2024-04-25 17:58:56.291640] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11673a0 00:03:58.431 [2024-04-25 17:58:56.291650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.431 [2024-04-25 17:58:56.293248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.431 [2024-04-25 17:58:56.293295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.431 Passthru0 00:03:58.431 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.431 17:58:56 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.431 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.431 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.431 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.431 17:58:56 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.431 { 00:03:58.431 "aliases": [ 00:03:58.431 "636fca25-9fd6-44b7-b995-10a4eca51839" 00:03:58.431 ], 00:03:58.431 "assigned_rate_limits": { 00:03:58.431 "r_mbytes_per_sec": 0, 00:03:58.431 "rw_ios_per_sec": 0, 00:03:58.431 "rw_mbytes_per_sec": 0, 00:03:58.431 "w_mbytes_per_sec": 0 00:03:58.431 }, 00:03:58.431 "block_size": 512, 00:03:58.431 "claim_type": "exclusive_write", 00:03:58.431 "claimed": true, 00:03:58.431 "driver_specific": {}, 00:03:58.431 "memory_domains": [ 00:03:58.431 { 00:03:58.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.431 "dma_device_type": 2 00:03:58.431 } 00:03:58.431 ], 00:03:58.431 "name": "Malloc0", 00:03:58.431 "num_blocks": 16384, 00:03:58.431 "product_name": "Malloc disk", 00:03:58.431 "supported_io_types": { 00:03:58.431 "abort": true, 00:03:58.431 "compare": false, 00:03:58.431 "compare_and_write": false, 00:03:58.431 "flush": true, 00:03:58.431 "nvme_admin": false, 00:03:58.431 "nvme_io": false, 00:03:58.431 "read": true, 00:03:58.431 "reset": true, 00:03:58.431 "unmap": true, 00:03:58.431 "write": true, 00:03:58.431 "write_zeroes": true 00:03:58.431 }, 00:03:58.431 "uuid": "636fca25-9fd6-44b7-b995-10a4eca51839", 00:03:58.431 "zoned": false 00:03:58.431 }, 00:03:58.431 { 00:03:58.431 "aliases": [ 00:03:58.431 "d58cc8ba-0e0d-5f41-8ac9-15b920545495" 00:03:58.431 ], 00:03:58.431 "assigned_rate_limits": { 00:03:58.431 "r_mbytes_per_sec": 0, 00:03:58.431 "rw_ios_per_sec": 0, 00:03:58.431 "rw_mbytes_per_sec": 0, 00:03:58.431 "w_mbytes_per_sec": 0 00:03:58.431 }, 00:03:58.431 "block_size": 512, 00:03:58.432 "claimed": false, 00:03:58.432 "driver_specific": { 00:03:58.432 "passthru": { 00:03:58.432 "base_bdev_name": "Malloc0", 00:03:58.432 "name": "Passthru0" 00:03:58.432 } 00:03:58.432 }, 00:03:58.432 "memory_domains": [ 00:03:58.432 { 00:03:58.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.432 "dma_device_type": 2 00:03:58.432 } 00:03:58.432 ], 
00:03:58.432 "name": "Passthru0", 00:03:58.432 "num_blocks": 16384, 00:03:58.432 "product_name": "passthru", 00:03:58.432 "supported_io_types": { 00:03:58.432 "abort": true, 00:03:58.432 "compare": false, 00:03:58.432 "compare_and_write": false, 00:03:58.432 "flush": true, 00:03:58.432 "nvme_admin": false, 00:03:58.432 "nvme_io": false, 00:03:58.432 "read": true, 00:03:58.432 "reset": true, 00:03:58.432 "unmap": true, 00:03:58.432 "write": true, 00:03:58.432 "write_zeroes": true 00:03:58.432 }, 00:03:58.432 "uuid": "d58cc8ba-0e0d-5f41-8ac9-15b920545495", 00:03:58.432 "zoned": false 00:03:58.432 } 00:03:58.432 ]' 00:03:58.432 17:58:56 -- rpc/rpc.sh@21 -- # jq length 00:03:58.690 17:58:56 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.691 17:58:56 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.691 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.691 17:58:56 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:58.691 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.691 17:58:56 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.691 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.691 17:58:56 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.691 17:58:56 -- rpc/rpc.sh@26 -- # jq length 00:03:58.691 17:58:56 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.691 00:03:58.691 real 0m0.336s 00:03:58.691 user 0m0.225s 00:03:58.691 sys 0m0.037s 00:03:58.691 ************************************ 00:03:58.691 END TEST rpc_integrity 00:03:58.691 ************************************ 00:03:58.691 17:58:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 17:58:56 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:58.691 17:58:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.691 17:58:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 ************************************ 00:03:58.691 START TEST rpc_plugins 00:03:58.691 ************************************ 00:03:58.691 17:58:56 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:03:58.691 17:58:56 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:58.691 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.691 17:58:56 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:58.691 17:58:56 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:58.691 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.691 17:58:56 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:58.691 { 00:03:58.691 "aliases": [ 00:03:58.691 "421f07aa-a3ff-43e7-9643-ea3562e9eb7a" 00:03:58.691 ], 00:03:58.691 "assigned_rate_limits": { 00:03:58.691 "r_mbytes_per_sec": 0, 00:03:58.691 
"rw_ios_per_sec": 0, 00:03:58.691 "rw_mbytes_per_sec": 0, 00:03:58.691 "w_mbytes_per_sec": 0 00:03:58.691 }, 00:03:58.691 "block_size": 4096, 00:03:58.691 "claimed": false, 00:03:58.691 "driver_specific": {}, 00:03:58.691 "memory_domains": [ 00:03:58.691 { 00:03:58.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.691 "dma_device_type": 2 00:03:58.691 } 00:03:58.691 ], 00:03:58.691 "name": "Malloc1", 00:03:58.691 "num_blocks": 256, 00:03:58.691 "product_name": "Malloc disk", 00:03:58.691 "supported_io_types": { 00:03:58.691 "abort": true, 00:03:58.691 "compare": false, 00:03:58.691 "compare_and_write": false, 00:03:58.691 "flush": true, 00:03:58.691 "nvme_admin": false, 00:03:58.691 "nvme_io": false, 00:03:58.691 "read": true, 00:03:58.691 "reset": true, 00:03:58.691 "unmap": true, 00:03:58.691 "write": true, 00:03:58.691 "write_zeroes": true 00:03:58.691 }, 00:03:58.691 "uuid": "421f07aa-a3ff-43e7-9643-ea3562e9eb7a", 00:03:58.691 "zoned": false 00:03:58.691 } 00:03:58.691 ]' 00:03:58.691 17:58:56 -- rpc/rpc.sh@32 -- # jq length 00:03:58.691 17:58:56 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:58.691 17:58:56 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:58.691 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.691 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.691 17:58:56 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:58.691 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.691 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.949 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.949 17:58:56 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:58.949 17:58:56 -- rpc/rpc.sh@36 -- # jq length 00:03:58.949 17:58:56 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:58.949 00:03:58.949 real 0m0.163s 00:03:58.949 user 0m0.107s 00:03:58.949 sys 0m0.017s 00:03:58.949 ************************************ 00:03:58.949 END TEST rpc_plugins 00:03:58.949 ************************************ 00:03:58.949 17:58:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.949 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.949 17:58:56 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:58.949 17:58:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.949 17:58:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.949 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.949 ************************************ 00:03:58.949 START TEST rpc_trace_cmd_test 00:03:58.949 ************************************ 00:03:58.949 17:58:56 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:03:58.949 17:58:56 -- rpc/rpc.sh@40 -- # local info 00:03:58.949 17:58:56 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:58.949 17:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:58.949 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:03:58.949 17:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:58.949 17:58:56 -- rpc/rpc.sh@42 -- # info='{ 00:03:58.949 "bdev": { 00:03:58.949 "mask": "0x8", 00:03:58.949 "tpoint_mask": "0xffffffffffffffff" 00:03:58.949 }, 00:03:58.949 "bdev_nvme": { 00:03:58.949 "mask": "0x4000", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "blobfs": { 00:03:58.949 "mask": "0x80", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "dsa": { 00:03:58.949 "mask": "0x200", 00:03:58.949 
"tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "ftl": { 00:03:58.949 "mask": "0x40", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "iaa": { 00:03:58.949 "mask": "0x1000", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "iscsi_conn": { 00:03:58.949 "mask": "0x2", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "nvme_pcie": { 00:03:58.949 "mask": "0x800", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "nvme_tcp": { 00:03:58.949 "mask": "0x2000", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "nvmf_rdma": { 00:03:58.949 "mask": "0x10", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.949 }, 00:03:58.949 "nvmf_tcp": { 00:03:58.949 "mask": "0x20", 00:03:58.949 "tpoint_mask": "0x0" 00:03:58.950 }, 00:03:58.950 "scsi": { 00:03:58.950 "mask": "0x4", 00:03:58.950 "tpoint_mask": "0x0" 00:03:58.950 }, 00:03:58.950 "thread": { 00:03:58.950 "mask": "0x400", 00:03:58.950 "tpoint_mask": "0x0" 00:03:58.950 }, 00:03:58.950 "tpoint_group_mask": "0x8", 00:03:58.950 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55596" 00:03:58.950 }' 00:03:58.950 17:58:56 -- rpc/rpc.sh@43 -- # jq length 00:03:58.950 17:58:56 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:58.950 17:58:56 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:58.950 17:58:56 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:58.950 17:58:56 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:59.207 17:58:56 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:59.207 17:58:56 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:59.207 17:58:56 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:59.207 17:58:56 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:59.207 17:58:57 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:59.207 00:03:59.207 real 0m0.270s 00:03:59.207 user 0m0.230s 00:03:59.207 sys 0m0.032s 00:03:59.207 17:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.207 ************************************ 00:03:59.207 END TEST rpc_trace_cmd_test 00:03:59.207 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.207 ************************************ 00:03:59.207 17:58:57 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:03:59.207 17:58:57 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:03:59.207 17:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.207 17:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.207 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.207 ************************************ 00:03:59.207 START TEST go_rpc 00:03:59.207 ************************************ 00:03:59.207 17:58:57 -- common/autotest_common.sh@1104 -- # go_rpc 00:03:59.207 17:58:57 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:03:59.207 17:58:57 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:03:59.207 17:58:57 -- rpc/rpc.sh@52 -- # jq length 00:03:59.207 17:58:57 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:03:59.207 17:58:57 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.207 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.207 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.207 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.207 17:58:57 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:03:59.207 17:58:57 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:03:59.465 17:58:57 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["5e200681-3fa8-4949-a938-7a1ebfbaa56e"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"5e200681-3fa8-4949-a938-7a1ebfbaa56e","zoned":false}]' 00:03:59.465 17:58:57 -- rpc/rpc.sh@57 -- # jq length 00:03:59.465 17:58:57 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:03:59.465 17:58:57 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:59.465 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.465 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.465 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.465 17:58:57 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:03:59.465 17:58:57 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:03:59.465 17:58:57 -- rpc/rpc.sh@61 -- # jq length 00:03:59.465 17:58:57 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:03:59.465 00:03:59.465 real 0m0.222s 00:03:59.465 user 0m0.151s 00:03:59.465 sys 0m0.033s 00:03:59.465 ************************************ 00:03:59.465 END TEST go_rpc 00:03:59.465 ************************************ 00:03:59.465 17:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.465 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.465 17:58:57 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:59.465 17:58:57 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:59.465 17:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.465 17:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.465 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.465 ************************************ 00:03:59.465 START TEST rpc_daemon_integrity 00:03:59.465 ************************************ 00:03:59.465 17:58:57 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:59.465 17:58:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.465 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.465 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.465 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.465 17:58:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.465 17:58:57 -- rpc/rpc.sh@13 -- # jq length 00:03:59.465 17:58:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.465 17:58:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.465 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.465 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.723 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.723 17:58:57 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:03:59.723 17:58:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.723 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.723 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.723 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.723 17:58:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.723 { 00:03:59.723 "aliases": [ 00:03:59.723 "6cd5e3d0-6940-47c1-856e-618e79645119" 00:03:59.723 ], 00:03:59.723 "assigned_rate_limits": { 00:03:59.723 
"r_mbytes_per_sec": 0, 00:03:59.723 "rw_ios_per_sec": 0, 00:03:59.723 "rw_mbytes_per_sec": 0, 00:03:59.723 "w_mbytes_per_sec": 0 00:03:59.723 }, 00:03:59.723 "block_size": 512, 00:03:59.723 "claimed": false, 00:03:59.723 "driver_specific": {}, 00:03:59.723 "memory_domains": [ 00:03:59.723 { 00:03:59.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.723 "dma_device_type": 2 00:03:59.723 } 00:03:59.723 ], 00:03:59.723 "name": "Malloc3", 00:03:59.723 "num_blocks": 16384, 00:03:59.723 "product_name": "Malloc disk", 00:03:59.723 "supported_io_types": { 00:03:59.723 "abort": true, 00:03:59.723 "compare": false, 00:03:59.723 "compare_and_write": false, 00:03:59.723 "flush": true, 00:03:59.723 "nvme_admin": false, 00:03:59.723 "nvme_io": false, 00:03:59.723 "read": true, 00:03:59.723 "reset": true, 00:03:59.723 "unmap": true, 00:03:59.723 "write": true, 00:03:59.723 "write_zeroes": true 00:03:59.723 }, 00:03:59.723 "uuid": "6cd5e3d0-6940-47c1-856e-618e79645119", 00:03:59.723 "zoned": false 00:03:59.723 } 00:03:59.723 ]' 00:03:59.723 17:58:57 -- rpc/rpc.sh@17 -- # jq length 00:03:59.723 17:58:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.723 17:58:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:03:59.723 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.723 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.723 [2024-04-25 17:58:57.480510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:59.723 [2024-04-25 17:58:57.480574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.723 [2024-04-25 17:58:57.480597] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11668c0 00:03:59.723 [2024-04-25 17:58:57.480607] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.723 [2024-04-25 17:58:57.482105] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.723 [2024-04-25 17:58:57.482155] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.723 Passthru0 00:03:59.723 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.723 17:58:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.723 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.724 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.724 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.724 17:58:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.724 { 00:03:59.724 "aliases": [ 00:03:59.724 "6cd5e3d0-6940-47c1-856e-618e79645119" 00:03:59.724 ], 00:03:59.724 "assigned_rate_limits": { 00:03:59.724 "r_mbytes_per_sec": 0, 00:03:59.724 "rw_ios_per_sec": 0, 00:03:59.724 "rw_mbytes_per_sec": 0, 00:03:59.724 "w_mbytes_per_sec": 0 00:03:59.724 }, 00:03:59.724 "block_size": 512, 00:03:59.724 "claim_type": "exclusive_write", 00:03:59.724 "claimed": true, 00:03:59.724 "driver_specific": {}, 00:03:59.724 "memory_domains": [ 00:03:59.724 { 00:03:59.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.724 "dma_device_type": 2 00:03:59.724 } 00:03:59.724 ], 00:03:59.724 "name": "Malloc3", 00:03:59.724 "num_blocks": 16384, 00:03:59.724 "product_name": "Malloc disk", 00:03:59.724 "supported_io_types": { 00:03:59.724 "abort": true, 00:03:59.724 "compare": false, 00:03:59.724 "compare_and_write": false, 00:03:59.724 "flush": true, 00:03:59.724 "nvme_admin": false, 00:03:59.724 "nvme_io": false, 00:03:59.724 "read": true, 00:03:59.724 "reset": true, 
00:03:59.724 "unmap": true, 00:03:59.724 "write": true, 00:03:59.724 "write_zeroes": true 00:03:59.724 }, 00:03:59.724 "uuid": "6cd5e3d0-6940-47c1-856e-618e79645119", 00:03:59.724 "zoned": false 00:03:59.724 }, 00:03:59.724 { 00:03:59.724 "aliases": [ 00:03:59.724 "4c5d0a77-995a-54c0-83cf-5b3ac79cdd31" 00:03:59.724 ], 00:03:59.724 "assigned_rate_limits": { 00:03:59.724 "r_mbytes_per_sec": 0, 00:03:59.724 "rw_ios_per_sec": 0, 00:03:59.724 "rw_mbytes_per_sec": 0, 00:03:59.724 "w_mbytes_per_sec": 0 00:03:59.724 }, 00:03:59.724 "block_size": 512, 00:03:59.724 "claimed": false, 00:03:59.724 "driver_specific": { 00:03:59.724 "passthru": { 00:03:59.724 "base_bdev_name": "Malloc3", 00:03:59.724 "name": "Passthru0" 00:03:59.724 } 00:03:59.724 }, 00:03:59.724 "memory_domains": [ 00:03:59.724 { 00:03:59.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.724 "dma_device_type": 2 00:03:59.724 } 00:03:59.724 ], 00:03:59.724 "name": "Passthru0", 00:03:59.724 "num_blocks": 16384, 00:03:59.724 "product_name": "passthru", 00:03:59.724 "supported_io_types": { 00:03:59.724 "abort": true, 00:03:59.724 "compare": false, 00:03:59.724 "compare_and_write": false, 00:03:59.724 "flush": true, 00:03:59.724 "nvme_admin": false, 00:03:59.724 "nvme_io": false, 00:03:59.724 "read": true, 00:03:59.724 "reset": true, 00:03:59.724 "unmap": true, 00:03:59.724 "write": true, 00:03:59.724 "write_zeroes": true 00:03:59.724 }, 00:03:59.724 "uuid": "4c5d0a77-995a-54c0-83cf-5b3ac79cdd31", 00:03:59.724 "zoned": false 00:03:59.724 } 00:03:59.724 ]' 00:03:59.724 17:58:57 -- rpc/rpc.sh@21 -- # jq length 00:03:59.724 17:58:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.724 17:58:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.724 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.724 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.724 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.724 17:58:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:03:59.724 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.724 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.724 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.724 17:58:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.724 17:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:59.724 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.724 17:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:59.724 17:58:57 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.724 17:58:57 -- rpc/rpc.sh@26 -- # jq length 00:03:59.724 17:58:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.724 00:03:59.724 real 0m0.311s 00:03:59.724 user 0m0.210s 00:03:59.724 sys 0m0.038s 00:03:59.724 17:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.724 17:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.724 ************************************ 00:03:59.724 END TEST rpc_daemon_integrity 00:03:59.724 ************************************ 00:03:59.982 17:58:57 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:59.982 17:58:57 -- rpc/rpc.sh@84 -- # killprocess 55596 00:03:59.982 17:58:57 -- common/autotest_common.sh@926 -- # '[' -z 55596 ']' 00:03:59.982 17:58:57 -- common/autotest_common.sh@930 -- # kill -0 55596 00:03:59.982 17:58:57 -- common/autotest_common.sh@931 -- # uname 00:03:59.982 17:58:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:59.982 17:58:57 -- common/autotest_common.sh@932 -- 
# ps --no-headers -o comm= 55596 00:03:59.982 17:58:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:59.982 killing process with pid 55596 00:03:59.982 17:58:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:59.982 17:58:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55596' 00:03:59.982 17:58:57 -- common/autotest_common.sh@945 -- # kill 55596 00:03:59.982 17:58:57 -- common/autotest_common.sh@950 -- # wait 55596 00:04:00.240 00:04:00.240 real 0m3.153s 00:04:00.240 user 0m4.126s 00:04:00.240 sys 0m0.787s 00:04:00.240 ************************************ 00:04:00.240 END TEST rpc 00:04:00.240 ************************************ 00:04:00.240 17:58:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.240 17:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.498 17:58:58 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:00.498 17:58:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.498 17:58:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.499 17:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.499 ************************************ 00:04:00.499 START TEST rpc_client 00:04:00.499 ************************************ 00:04:00.499 17:58:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:00.499 * Looking for test storage... 00:04:00.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:00.499 17:58:58 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:00.499 OK 00:04:00.499 17:58:58 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:00.499 00:04:00.499 real 0m0.104s 00:04:00.499 user 0m0.054s 00:04:00.499 sys 0m0.055s 00:04:00.499 17:58:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.499 ************************************ 00:04:00.499 END TEST rpc_client 00:04:00.499 ************************************ 00:04:00.499 17:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.499 17:58:58 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:00.499 17:58:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.499 17:58:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.499 17:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.499 ************************************ 00:04:00.499 START TEST json_config 00:04:00.499 ************************************ 00:04:00.499 17:58:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:00.499 17:58:58 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:00.499 17:58:58 -- nvmf/common.sh@7 -- # uname -s 00:04:00.499 17:58:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:00.499 17:58:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:00.499 17:58:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:00.499 17:58:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:00.499 17:58:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:00.499 17:58:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:00.499 17:58:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:00.499 17:58:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:00.499 17:58:58 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:00.499 17:58:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:00.499 17:58:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:04:00.499 17:58:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:04:00.499 17:58:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:00.499 17:58:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:00.499 17:58:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:00.499 17:58:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:00.499 17:58:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:00.499 17:58:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:00.499 17:58:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:00.499 17:58:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.499 17:58:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.499 17:58:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.499 17:58:58 -- paths/export.sh@5 -- # export PATH 00:04:00.499 17:58:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.499 17:58:58 -- nvmf/common.sh@46 -- # : 0 00:04:00.499 17:58:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:00.499 17:58:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:00.499 17:58:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:00.499 17:58:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:00.499 17:58:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:00.499 17:58:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:00.499 17:58:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:00.499 17:58:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:00.499 17:58:58 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:00.499 17:58:58 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 
]] 00:04:00.499 17:58:58 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:00.499 17:58:58 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:00.499 17:58:58 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:00.499 17:58:58 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:00.499 17:58:58 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:00.499 17:58:58 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:00.499 17:58:58 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:00.499 17:58:58 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:00.499 17:58:58 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:00.499 17:58:58 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:00.499 17:58:58 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:00.499 17:58:58 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:00.499 INFO: JSON configuration test init 00:04:00.499 17:58:58 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:00.499 17:58:58 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:00.499 17:58:58 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:00.499 17:58:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:00.499 17:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.499 17:58:58 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:00.499 17:58:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:00.499 17:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.499 17:58:58 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:00.499 17:58:58 -- json_config/json_config.sh@98 -- # local app=target 00:04:00.499 17:58:58 -- json_config/json_config.sh@99 -- # shift 00:04:00.499 17:58:58 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:00.499 17:58:58 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:00.499 17:58:58 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:00.499 17:58:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:00.499 17:58:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:00.499 Waiting for target to run... 00:04:00.499 17:58:58 -- json_config/json_config.sh@111 -- # app_pid[$app]=55907 00:04:00.499 17:58:58 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:00.499 17:58:58 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 
00:04:00.499 17:58:58 -- json_config/json_config.sh@114 -- # waitforlisten 55907 /var/tmp/spdk_tgt.sock 00:04:00.499 17:58:58 -- common/autotest_common.sh@819 -- # '[' -z 55907 ']' 00:04:00.499 17:58:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.499 17:58:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:00.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.499 17:58:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.499 17:58:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:00.499 17:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.758 [2024-04-25 17:58:58.494447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:00.758 [2024-04-25 17:58:58.494565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55907 ] 00:04:01.017 [2024-04-25 17:58:58.922869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.275 [2024-04-25 17:58:59.013594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:01.275 [2024-04-25 17:58:59.013800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.852 17:58:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:01.852 17:58:59 -- common/autotest_common.sh@852 -- # return 0 00:04:01.852 00:04:01.852 17:58:59 -- json_config/json_config.sh@115 -- # echo '' 00:04:01.852 17:58:59 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:01.852 17:58:59 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:01.852 17:58:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:01.853 17:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:01.853 17:58:59 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:01.853 17:58:59 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:01.853 17:58:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:01.853 17:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:01.853 17:58:59 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:01.853 17:58:59 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:01.853 17:58:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:02.111 17:59:00 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:02.111 17:59:00 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:02.111 17:59:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:02.111 17:59:00 -- common/autotest_common.sh@10 -- # set +x 00:04:02.111 17:59:00 -- json_config/json_config.sh@48 -- # local ret=0 00:04:02.111 17:59:00 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:02.111 17:59:00 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:02.111 17:59:00 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:02.111 17:59:00 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:02.111 17:59:00 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:02.676 17:59:00 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:02.676 17:59:00 -- json_config/json_config.sh@51 -- # local get_types 00:04:02.676 17:59:00 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:02.676 17:59:00 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:02.676 17:59:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:02.676 17:59:00 -- common/autotest_common.sh@10 -- # set +x 00:04:02.676 17:59:00 -- json_config/json_config.sh@58 -- # return 0 00:04:02.676 17:59:00 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:02.676 17:59:00 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:02.676 17:59:00 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:02.676 17:59:00 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:02.676 17:59:00 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:02.676 17:59:00 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:02.676 17:59:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:02.676 17:59:00 -- common/autotest_common.sh@10 -- # set +x 00:04:02.676 17:59:00 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:02.676 17:59:00 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:02.676 17:59:00 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:02.676 17:59:00 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:02.676 17:59:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:02.935 MallocForNvmf0 00:04:02.935 17:59:00 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:02.935 17:59:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:03.193 MallocForNvmf1 00:04:03.193 17:59:00 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:03.193 17:59:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:03.451 [2024-04-25 17:59:01.177740] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:03.451 17:59:01 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:03.451 17:59:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:03.709 17:59:01 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:03.709 17:59:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:03.709 17:59:01 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:03.709 17:59:01 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:03.968 17:59:01 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:03.968 17:59:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:04.227 [2024-04-25 17:59:02.034383] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:04.227 17:59:02 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:04.227 17:59:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:04.227 17:59:02 -- common/autotest_common.sh@10 -- # set +x 00:04:04.227 17:59:02 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:04.227 17:59:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:04.227 17:59:02 -- common/autotest_common.sh@10 -- # set +x 00:04:04.227 17:59:02 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:04.227 17:59:02 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:04.227 17:59:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:04.503 MallocBdevForConfigChangeCheck 00:04:04.503 17:59:02 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:04.503 17:59:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:04.503 17:59:02 -- common/autotest_common.sh@10 -- # set +x 00:04:04.503 17:59:02 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:04.503 17:59:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.074 INFO: shutting down applications... 00:04:05.074 17:59:02 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:04:05.074 17:59:02 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:05.074 17:59:02 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:05.074 17:59:02 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:05.074 17:59:02 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:05.339 Calling clear_iscsi_subsystem 00:04:05.339 Calling clear_nvmf_subsystem 00:04:05.339 Calling clear_nbd_subsystem 00:04:05.339 Calling clear_ublk_subsystem 00:04:05.339 Calling clear_vhost_blk_subsystem 00:04:05.339 Calling clear_vhost_scsi_subsystem 00:04:05.339 Calling clear_scheduler_subsystem 00:04:05.339 Calling clear_bdev_subsystem 00:04:05.339 Calling clear_accel_subsystem 00:04:05.339 Calling clear_vmd_subsystem 00:04:05.339 Calling clear_sock_subsystem 00:04:05.339 Calling clear_iobuf_subsystem 00:04:05.339 17:59:03 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:05.339 17:59:03 -- json_config/json_config.sh@396 -- # count=100 00:04:05.339 17:59:03 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:05.339 17:59:03 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.339 17:59:03 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:05.339 17:59:03 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:05.598 17:59:03 -- json_config/json_config.sh@398 -- # break 00:04:05.598 17:59:03 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:05.598 17:59:03 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:05.598 17:59:03 -- json_config/json_config.sh@120 -- # local app=target 00:04:05.598 17:59:03 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:05.598 17:59:03 -- json_config/json_config.sh@124 -- # [[ -n 55907 ]] 00:04:05.598 17:59:03 -- json_config/json_config.sh@127 -- # kill -SIGINT 55907 00:04:05.598 17:59:03 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:05.598 17:59:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:05.598 17:59:03 -- json_config/json_config.sh@130 -- # kill -0 55907 00:04:05.598 17:59:03 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:06.164 17:59:03 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:06.164 17:59:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:06.164 17:59:03 -- json_config/json_config.sh@130 -- # kill -0 55907 00:04:06.164 17:59:03 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:06.164 17:59:03 -- json_config/json_config.sh@132 -- # break 00:04:06.164 17:59:03 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:06.164 17:59:03 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:06.164 SPDK target shutdown done 00:04:06.164 INFO: relaunching applications... 00:04:06.164 17:59:03 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:04:06.164 17:59:03 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:06.164 17:59:03 -- json_config/json_config.sh@98 -- # local app=target 00:04:06.164 17:59:03 -- json_config/json_config.sh@99 -- # shift 00:04:06.164 17:59:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:06.164 17:59:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:06.165 17:59:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:06.165 17:59:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:06.165 17:59:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:06.165 17:59:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=56176 00:04:06.165 Waiting for target to run... 00:04:06.165 17:59:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:06.165 17:59:03 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:06.165 17:59:03 -- json_config/json_config.sh@114 -- # waitforlisten 56176 /var/tmp/spdk_tgt.sock 00:04:06.165 17:59:03 -- common/autotest_common.sh@819 -- # '[' -z 56176 ']' 00:04:06.165 17:59:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.165 17:59:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:06.165 17:59:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.165 17:59:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:06.165 17:59:03 -- common/autotest_common.sh@10 -- # set +x 00:04:06.165 [2024-04-25 17:59:04.050439] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:06.165 [2024-04-25 17:59:04.050539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56176 ] 00:04:06.732 [2024-04-25 17:59:04.469573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.732 [2024-04-25 17:59:04.559192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:06.732 [2024-04-25 17:59:04.559417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.996 [2024-04-25 17:59:04.869173] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.996 [2024-04-25 17:59:04.901252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:07.254 17:59:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:07.254 00:04:07.254 17:59:04 -- common/autotest_common.sh@852 -- # return 0 00:04:07.254 17:59:04 -- json_config/json_config.sh@115 -- # echo '' 00:04:07.254 17:59:04 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:07.254 INFO: Checking if target configuration is the same... 00:04:07.254 17:59:04 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:04:07.254 17:59:05 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:07.254 17:59:05 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:07.254 17:59:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.254 + '[' 2 -ne 2 ']' 00:04:07.254 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:07.254 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:07.254 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:07.254 +++ basename /dev/fd/62 00:04:07.254 ++ mktemp /tmp/62.XXX 00:04:07.254 + tmp_file_1=/tmp/62.06N 00:04:07.254 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:07.254 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:07.254 + tmp_file_2=/tmp/spdk_tgt_config.json.WC7 00:04:07.254 + ret=0 00:04:07.254 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:07.512 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:07.771 + diff -u /tmp/62.06N /tmp/spdk_tgt_config.json.WC7 00:04:07.771 INFO: JSON config files are the same 00:04:07.771 + echo 'INFO: JSON config files are the same' 00:04:07.771 + rm /tmp/62.06N /tmp/spdk_tgt_config.json.WC7 00:04:07.771 + exit 0 00:04:07.771 17:59:05 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:07.771 INFO: changing configuration and checking if this can be detected... 00:04:07.771 17:59:05 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:07.771 17:59:05 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:07.771 17:59:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.029 17:59:05 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:08.029 17:59:05 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:08.029 17:59:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.029 + '[' 2 -ne 2 ']' 00:04:08.029 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:08.029 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:08.029 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:08.029 +++ basename /dev/fd/62 00:04:08.029 ++ mktemp /tmp/62.XXX 00:04:08.030 + tmp_file_1=/tmp/62.Ajn 00:04:08.030 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:08.030 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.030 + tmp_file_2=/tmp/spdk_tgt_config.json.rUX 00:04:08.030 + ret=0 00:04:08.030 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:08.288 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:08.288 + diff -u /tmp/62.Ajn /tmp/spdk_tgt_config.json.rUX 00:04:08.288 + ret=1 00:04:08.288 + echo '=== Start of file: /tmp/62.Ajn ===' 00:04:08.288 + cat /tmp/62.Ajn 00:04:08.288 + echo '=== End of file: /tmp/62.Ajn ===' 00:04:08.288 + echo '' 00:04:08.288 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rUX ===' 00:04:08.288 + cat /tmp/spdk_tgt_config.json.rUX 00:04:08.288 + echo '=== End of file: /tmp/spdk_tgt_config.json.rUX ===' 00:04:08.288 + echo '' 00:04:08.288 + rm /tmp/62.Ajn /tmp/spdk_tgt_config.json.rUX 00:04:08.288 + exit 1 00:04:08.288 INFO: configuration change detected. 00:04:08.288 17:59:06 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:08.288 17:59:06 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:08.288 17:59:06 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:08.288 17:59:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:08.288 17:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.288 17:59:06 -- json_config/json_config.sh@360 -- # local ret=0 00:04:08.288 17:59:06 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:08.288 17:59:06 -- json_config/json_config.sh@370 -- # [[ -n 56176 ]] 00:04:08.288 17:59:06 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:08.288 17:59:06 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:08.288 17:59:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:08.288 17:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.288 17:59:06 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:08.288 17:59:06 -- json_config/json_config.sh@246 -- # uname -s 00:04:08.288 17:59:06 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:08.288 17:59:06 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:08.547 17:59:06 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:08.547 17:59:06 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:08.547 17:59:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:08.547 17:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.547 17:59:06 -- json_config/json_config.sh@376 -- # killprocess 56176 00:04:08.547 17:59:06 -- common/autotest_common.sh@926 -- # '[' -z 56176 ']' 00:04:08.547 17:59:06 -- common/autotest_common.sh@930 -- # kill -0 56176 00:04:08.547 17:59:06 -- common/autotest_common.sh@931 -- # uname 00:04:08.547 17:59:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:08.547 17:59:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56176 00:04:08.547 killing process with pid 56176 00:04:08.547 17:59:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:08.547 17:59:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:08.547 17:59:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56176' 00:04:08.547 
17:59:06 -- common/autotest_common.sh@945 -- # kill 56176 00:04:08.547 17:59:06 -- common/autotest_common.sh@950 -- # wait 56176 00:04:08.805 17:59:06 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:08.805 17:59:06 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:08.805 17:59:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:08.805 17:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.805 INFO: Success 00:04:08.805 17:59:06 -- json_config/json_config.sh@381 -- # return 0 00:04:08.805 17:59:06 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:08.805 00:04:08.805 real 0m8.277s 00:04:08.805 user 0m11.785s 00:04:08.805 sys 0m1.825s 00:04:08.805 ************************************ 00:04:08.805 END TEST json_config 00:04:08.805 ************************************ 00:04:08.805 17:59:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.805 17:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.805 17:59:06 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:08.805 17:59:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.805 17:59:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.805 17:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.805 ************************************ 00:04:08.805 START TEST json_config_extra_key 00:04:08.805 ************************************ 00:04:08.805 17:59:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:08.805 17:59:06 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:08.805 17:59:06 -- nvmf/common.sh@7 -- # uname -s 00:04:08.805 17:59:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.805 17:59:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.805 17:59:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.805 17:59:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.805 17:59:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.805 17:59:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.805 17:59:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.805 17:59:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.805 17:59:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.805 17:59:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.805 17:59:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:04:08.805 17:59:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:04:08.805 17:59:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.805 17:59:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.805 17:59:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:08.805 17:59:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:09.064 17:59:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.064 17:59:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.064 17:59:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.064 17:59:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.064 17:59:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.064 17:59:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.064 17:59:06 -- paths/export.sh@5 -- # export PATH 00:04:09.064 17:59:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.064 17:59:06 -- nvmf/common.sh@46 -- # : 0 00:04:09.064 17:59:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:09.064 17:59:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:09.064 17:59:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:09.064 17:59:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.064 17:59:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.064 17:59:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:09.064 17:59:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:09.064 17:59:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:09.064 INFO: launching applications... 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:09.064 Waiting for target to run... 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56351 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56351 /var/tmp/spdk_tgt.sock 00:04:09.064 17:59:06 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:09.064 17:59:06 -- common/autotest_common.sh@819 -- # '[' -z 56351 ']' 00:04:09.064 17:59:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:09.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:09.064 17:59:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:09.064 17:59:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:09.064 17:59:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:09.064 17:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.064 [2024-04-25 17:59:06.823572] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:09.064 [2024-04-25 17:59:06.823691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56351 ] 00:04:09.330 [2024-04-25 17:59:07.252040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.590 [2024-04-25 17:59:07.339170] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:09.590 [2024-04-25 17:59:07.339364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.154 00:04:10.154 INFO: shutting down applications... 00:04:10.154 17:59:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:10.154 17:59:07 -- common/autotest_common.sh@852 -- # return 0 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56351 ]] 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56351 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56351 00:04:10.154 17:59:07 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56351 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:10.412 SPDK target shutdown done 00:04:10.412 17:59:08 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:10.412 Success 00:04:10.412 ************************************ 00:04:10.412 END TEST json_config_extra_key 00:04:10.412 ************************************ 00:04:10.412 00:04:10.412 real 0m1.644s 00:04:10.412 user 0m1.590s 00:04:10.412 sys 0m0.461s 00:04:10.412 17:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.412 17:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:10.670 17:59:08 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:10.670 17:59:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.670 17:59:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.670 17:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:10.670 ************************************ 00:04:10.670 START TEST alias_rpc 00:04:10.670 ************************************ 00:04:10.670 17:59:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:10.670 * Looking for test storage... 00:04:10.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:10.670 17:59:08 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:10.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.670 17:59:08 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56421 00:04:10.670 17:59:08 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56421 00:04:10.670 17:59:08 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:10.670 17:59:08 -- common/autotest_common.sh@819 -- # '[' -z 56421 ']' 00:04:10.670 17:59:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.670 17:59:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:10.670 17:59:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:10.670 17:59:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:10.670 17:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:10.670 [2024-04-25 17:59:08.511233] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:10.670 [2024-04-25 17:59:08.511377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56421 ] 00:04:10.928 [2024-04-25 17:59:08.650301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.928 [2024-04-25 17:59:08.758129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:10.928 [2024-04-25 17:59:08.758281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.862 17:59:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:11.862 17:59:09 -- common/autotest_common.sh@852 -- # return 0 00:04:11.862 17:59:09 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:12.120 17:59:09 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56421 00:04:12.120 17:59:09 -- common/autotest_common.sh@926 -- # '[' -z 56421 ']' 00:04:12.120 17:59:09 -- common/autotest_common.sh@930 -- # kill -0 56421 00:04:12.120 17:59:09 -- common/autotest_common.sh@931 -- # uname 00:04:12.120 17:59:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:12.120 17:59:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56421 00:04:12.120 killing process with pid 56421 00:04:12.120 17:59:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:12.120 17:59:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:12.120 17:59:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56421' 00:04:12.120 17:59:09 -- common/autotest_common.sh@945 -- # kill 56421 00:04:12.120 17:59:09 -- common/autotest_common.sh@950 -- # wait 56421 00:04:12.686 ************************************ 00:04:12.686 END TEST alias_rpc 00:04:12.686 ************************************ 00:04:12.686 00:04:12.686 real 0m1.949s 00:04:12.686 user 0m2.234s 00:04:12.686 sys 0m0.483s 00:04:12.686 17:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.686 17:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.686 17:59:10 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:04:12.686 17:59:10 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.686 17:59:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.686 17:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.686 17:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.686 ************************************ 00:04:12.686 START TEST dpdk_mem_utility 00:04:12.686 ************************************ 00:04:12.686 17:59:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.686 * Looking for test storage... 
00:04:12.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:12.686 17:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:12.686 17:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56512 00:04:12.686 17:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.686 17:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56512 00:04:12.686 17:59:10 -- common/autotest_common.sh@819 -- # '[' -z 56512 ']' 00:04:12.686 17:59:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.686 17:59:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:12.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.686 17:59:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.686 17:59:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:12.686 17:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.686 [2024-04-25 17:59:10.508315] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:12.686 [2024-04-25 17:59:10.508424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56512 ] 00:04:12.944 [2024-04-25 17:59:10.643372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.944 [2024-04-25 17:59:10.771216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:12.944 [2024-04-25 17:59:10.771455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.880 17:59:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:13.880 17:59:11 -- common/autotest_common.sh@852 -- # return 0 00:04:13.880 17:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:13.880 17:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:13.880 17:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:13.880 17:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:13.880 { 00:04:13.880 "filename": "/tmp/spdk_mem_dump.txt" 00:04:13.880 } 00:04:13.880 17:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:13.880 17:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:13.880 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:13.880 1 heaps totaling size 814.000000 MiB 00:04:13.880 size: 814.000000 MiB heap id: 0 00:04:13.880 end heaps---------- 00:04:13.880 8 mempools totaling size 598.116089 MiB 00:04:13.880 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:13.880 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:13.880 size: 84.521057 MiB name: bdev_io_56512 00:04:13.880 size: 51.011292 MiB name: evtpool_56512 00:04:13.880 size: 50.003479 MiB name: msgpool_56512 00:04:13.880 size: 21.763794 MiB name: PDU_Pool 00:04:13.880 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:13.880 size: 0.026123 MiB name: Session_Pool 00:04:13.880 end mempools------- 00:04:13.880 6 memzones totaling size 4.142822 MiB 00:04:13.880 size: 1.000366 MiB name: RG_ring_0_56512 
00:04:13.880 size: 1.000366 MiB name: RG_ring_1_56512 00:04:13.880 size: 1.000366 MiB name: RG_ring_4_56512 00:04:13.880 size: 1.000366 MiB name: RG_ring_5_56512 00:04:13.880 size: 0.125366 MiB name: RG_ring_2_56512 00:04:13.880 size: 0.015991 MiB name: RG_ring_3_56512 00:04:13.880 end memzones------- 00:04:13.880 17:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:13.880 heap id: 0 total size: 814.000000 MiB number of busy elements: 222 number of free elements: 15 00:04:13.880 list of free elements. size: 12.486206 MiB 00:04:13.880 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:13.880 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:13.880 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:13.880 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:13.880 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:13.880 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:13.880 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:13.880 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:13.880 element at address: 0x200000200000 with size: 0.837219 MiB 00:04:13.880 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:04:13.880 element at address: 0x20000b200000 with size: 0.489441 MiB 00:04:13.880 element at address: 0x200000800000 with size: 0.486877 MiB 00:04:13.880 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:13.880 element at address: 0x200027e00000 with size: 0.398132 MiB 00:04:13.880 element at address: 0x200003a00000 with size: 0.351501 MiB 00:04:13.880 list of standard malloc elements. size: 199.251221 MiB 00:04:13.880 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:13.880 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:13.880 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:13.880 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:13.880 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:13.880 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:13.880 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:13.880 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:13.880 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:13.880 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:04:13.880 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:13.880 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:13.880 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:13.880 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:13.880 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:13.880 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:13.880 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:13.880 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:13.881 element at 
address: 0x200003affa80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa93f40 
with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:13.881 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6da40 with size: 0.000183 MiB 
00:04:13.881 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:13.881 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:13.882 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:13.882 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:13.882 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:13.882 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:13.882 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:13.882 list of 
memzone associated elements. size: 602.262573 MiB 00:04:13.882 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:13.882 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:13.882 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:13.882 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:13.882 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:13.882 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56512_0 00:04:13.882 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:13.882 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56512_0 00:04:13.882 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:13.882 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56512_0 00:04:13.882 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:13.882 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:13.882 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:13.882 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:13.882 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:13.882 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56512 00:04:13.882 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:13.882 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56512 00:04:13.882 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:13.882 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56512 00:04:13.882 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:13.882 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:13.882 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:13.882 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:13.882 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:13.882 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:13.882 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:13.882 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:13.882 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:13.882 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56512 00:04:13.882 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:13.882 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56512 00:04:13.882 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:13.882 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56512 00:04:13.882 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:13.882 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56512 00:04:13.882 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:13.882 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56512 00:04:13.882 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:13.882 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:13.882 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:13.882 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:13.882 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:13.882 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:13.882 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:13.882 
associated memzone info: size: 0.125366 MiB name: RG_ring_2_56512 00:04:13.882 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:13.882 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:13.882 element at address: 0x200027e66040 with size: 0.023743 MiB 00:04:13.882 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:13.882 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:13.882 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56512 00:04:13.882 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:04:13.882 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:13.882 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:13.882 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56512 00:04:13.882 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:13.882 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56512 00:04:13.882 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:04:13.882 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:13.882 17:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:13.882 17:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56512 00:04:13.882 17:59:11 -- common/autotest_common.sh@926 -- # '[' -z 56512 ']' 00:04:13.882 17:59:11 -- common/autotest_common.sh@930 -- # kill -0 56512 00:04:13.882 17:59:11 -- common/autotest_common.sh@931 -- # uname 00:04:13.882 17:59:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:13.882 17:59:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56512 00:04:13.882 17:59:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:13.882 17:59:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:13.882 17:59:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56512' 00:04:13.882 killing process with pid 56512 00:04:13.882 17:59:11 -- common/autotest_common.sh@945 -- # kill 56512 00:04:13.882 17:59:11 -- common/autotest_common.sh@950 -- # wait 56512 00:04:14.448 00:04:14.448 real 0m1.774s 00:04:14.448 user 0m1.918s 00:04:14.448 sys 0m0.454s 00:04:14.448 ************************************ 00:04:14.448 END TEST dpdk_mem_utility 00:04:14.448 ************************************ 00:04:14.448 17:59:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.448 17:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:14.448 17:59:12 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:14.448 17:59:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.448 17:59:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.448 17:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:14.448 ************************************ 00:04:14.448 START TEST event 00:04:14.448 ************************************ 00:04:14.448 17:59:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:14.448 * Looking for test storage... 
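The memory report above was produced in two steps, both visible in the trace: the env_dpdk_get_mem_stats RPC told the running target to write its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then turned that dump into the heap, mempool and memzone summary (the -m 0 form adds the per-element listing for heap 0). Against a target that is already up and listening, the same flow is simply:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # heap / mempool / memzone totals
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # detailed element list for heap 0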
00:04:14.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:14.448 17:59:12 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:14.448 17:59:12 -- bdev/nbd_common.sh@6 -- # set -e 00:04:14.448 17:59:12 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:14.448 17:59:12 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:14.448 17:59:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.448 17:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:14.448 ************************************ 00:04:14.448 START TEST event_perf 00:04:14.448 ************************************ 00:04:14.448 17:59:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:14.448 Running I/O for 1 seconds...[2024-04-25 17:59:12.315032] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:14.448 [2024-04-25 17:59:12.316031] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56606 ] 00:04:14.707 [2024-04-25 17:59:12.458301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:14.707 [2024-04-25 17:59:12.588215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.707 [2024-04-25 17:59:12.588374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:14.707 [2024-04-25 17:59:12.588462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:14.707 [2024-04-25 17:59:12.588469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.083 Running I/O for 1 seconds... 00:04:16.083 lcore 0: 187306 00:04:16.083 lcore 1: 187306 00:04:16.083 lcore 2: 187306 00:04:16.083 lcore 3: 187306 00:04:16.083 done. 00:04:16.083 00:04:16.083 ************************************ 00:04:16.083 END TEST event_perf 00:04:16.083 ************************************ 00:04:16.083 real 0m1.410s 00:04:16.083 user 0m4.219s 00:04:16.083 sys 0m0.066s 00:04:16.083 17:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.083 17:59:13 -- common/autotest_common.sh@10 -- # set +x 00:04:16.083 17:59:13 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:16.083 17:59:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:16.083 17:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.083 17:59:13 -- common/autotest_common.sh@10 -- # set +x 00:04:16.083 ************************************ 00:04:16.083 START TEST event_reactor 00:04:16.083 ************************************ 00:04:16.083 17:59:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:16.083 [2024-04-25 17:59:13.781333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
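The event_perf run above is the raw event-framework benchmark: one reactor per core in the mask, each counting the events it handles in the requested time, which is what the lcore lines (about 187k events per core here) report. Reduced to its invocation, the run is:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1   # -m 0xF: cores 0-3, -t 1: run for one second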
00:04:16.083 [2024-04-25 17:59:13.781423] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56645 ] 00:04:16.083 [2024-04-25 17:59:13.914038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.341 [2024-04-25 17:59:14.037484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.277 test_start 00:04:17.277 oneshot 00:04:17.277 tick 100 00:04:17.277 tick 100 00:04:17.277 tick 250 00:04:17.277 tick 100 00:04:17.277 tick 100 00:04:17.277 tick 100 00:04:17.277 tick 250 00:04:17.277 tick 500 00:04:17.277 tick 100 00:04:17.277 tick 100 00:04:17.277 tick 250 00:04:17.277 tick 100 00:04:17.277 tick 100 00:04:17.277 test_end 00:04:17.277 00:04:17.277 real 0m1.382s 00:04:17.277 user 0m1.219s 00:04:17.277 sys 0m0.057s 00:04:17.277 17:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.277 17:59:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.277 ************************************ 00:04:17.277 END TEST event_reactor 00:04:17.277 ************************************ 00:04:17.277 17:59:15 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.277 17:59:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:17.277 17:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.277 17:59:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.277 ************************************ 00:04:17.277 START TEST event_reactor_perf 00:04:17.277 ************************************ 00:04:17.277 17:59:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.277 [2024-04-25 17:59:15.209452] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:17.277 [2024-04-25 17:59:15.209545] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56680 ] 00:04:17.536 [2024-04-25 17:59:15.347048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.536 [2024-04-25 17:59:15.466788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.911 test_start 00:04:18.911 test_end 00:04:18.911 Performance: 360447 events per second 00:04:18.911 00:04:18.911 real 0m1.387s 00:04:18.911 user 0m1.227s 00:04:18.911 sys 0m0.053s 00:04:18.911 17:59:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.911 ************************************ 00:04:18.911 END TEST event_reactor_perf 00:04:18.911 ************************************ 00:04:18.911 17:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.911 17:59:16 -- event/event.sh@49 -- # uname -s 00:04:18.911 17:59:16 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:18.912 17:59:16 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:18.912 17:59:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.912 17:59:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.912 17:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.912 ************************************ 00:04:18.912 START TEST event_scheduler 00:04:18.912 ************************************ 00:04:18.912 17:59:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:18.912 * Looking for test storage... 00:04:18.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:18.912 17:59:16 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:18.912 17:59:16 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56735 00:04:18.912 17:59:16 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:18.912 17:59:16 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.912 17:59:16 -- scheduler/scheduler.sh@37 -- # waitforlisten 56735 00:04:18.912 17:59:16 -- common/autotest_common.sh@819 -- # '[' -z 56735 ']' 00:04:18.912 17:59:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.912 17:59:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:18.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.912 17:59:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.912 17:59:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:18.912 17:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.912 [2024-04-25 17:59:16.791171] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:18.912 [2024-04-25 17:59:16.791364] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56735 ] 00:04:19.170 [2024-04-25 17:59:16.947075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:19.170 [2024-04-25 17:59:17.100351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.170 [2024-04-25 17:59:17.100485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.170 [2024-04-25 17:59:17.101350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:19.170 [2024-04-25 17:59:17.101357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.106 17:59:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:20.106 17:59:17 -- common/autotest_common.sh@852 -- # return 0 00:04:20.106 17:59:17 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:20.106 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.106 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.106 POWER: Env isn't set yet! 00:04:20.106 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:20.107 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:20.107 POWER: Cannot set governor of lcore 0 to userspace 00:04:20.107 POWER: Attempting to initialise PSTAT power management... 00:04:20.107 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:20.107 POWER: Cannot set governor of lcore 0 to performance 00:04:20.107 POWER: Attempting to initialise AMD PSTATE power management... 00:04:20.107 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:20.107 POWER: Cannot set governor of lcore 0 to userspace 00:04:20.107 POWER: Attempting to initialise CPPC power management... 00:04:20.107 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:20.107 POWER: Cannot set governor of lcore 0 to userspace 00:04:20.107 POWER: Attempting to initialise VM power management... 00:04:20.107 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:20.107 POWER: Unable to set Power Management Environment for lcore 0 00:04:20.107 [2024-04-25 17:59:17.786728] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:20.107 [2024-04-25 17:59:17.786743] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:20.107 [2024-04-25 17:59:17.786752] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 [2024-04-25 17:59:17.928373] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
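Before starting its threads, the scheduler test switches the framework to the dynamic scheduler and then completes initialization; the POWER and governor errors above just mean no cpufreq governor could be set up inside this VM, so the dynamic scheduler runs without frequency scaling. rpc_cmd in the trace is the test suite's RPC wrapper; issued by hand against the same app (started with --wait-for-rpc), the two calls would look roughly like:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic   # pick the scheduler while the app is still waiting for RPCs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init              # finish init; the scheduler test app then starts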
00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:20.107 17:59:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.107 17:59:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 ************************************ 00:04:20.107 START TEST scheduler_create_thread 00:04:20.107 ************************************ 00:04:20.107 17:59:17 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 2 00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 3 00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 4 00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 5 00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 6 00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 7 00:04:20.107 17:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:17 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:20.107 17:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 8 00:04:20.107 17:59:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:18 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:20.107 17:59:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 9 00:04:20.107 
17:59:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:18 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:20.107 17:59:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 10 00:04:20.107 17:59:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:18 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:20.107 17:59:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 17:59:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.107 17:59:18 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:20.107 17:59:18 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:20.107 17:59:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.107 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 17:59:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.365 17:59:18 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:20.365 17:59:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.365 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.931 17:59:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.931 17:59:18 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:20.931 17:59:18 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:20.931 17:59:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.931 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:21.866 17:59:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.866 00:04:21.866 real 0m1.751s 00:04:21.866 user 0m0.025s 00:04:21.866 sys 0m0.001s 00:04:21.866 17:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.866 ************************************ 00:04:21.866 END TEST scheduler_create_thread 00:04:21.866 ************************************ 00:04:21.866 17:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.866 17:59:19 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:21.866 17:59:19 -- scheduler/scheduler.sh@46 -- # killprocess 56735 00:04:21.866 17:59:19 -- common/autotest_common.sh@926 -- # '[' -z 56735 ']' 00:04:21.866 17:59:19 -- common/autotest_common.sh@930 -- # kill -0 56735 00:04:21.866 17:59:19 -- common/autotest_common.sh@931 -- # uname 00:04:21.866 17:59:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:21.866 17:59:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56735 00:04:21.866 17:59:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:21.866 17:59:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:21.866 killing process with pid 56735 00:04:21.866 17:59:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56735' 00:04:21.866 17:59:19 -- common/autotest_common.sh@945 -- # kill 56735 00:04:21.866 17:59:19 -- common/autotest_common.sh@950 -- # wait 56735 00:04:22.433 [2024-04-25 17:59:20.172112] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
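scheduler_create_thread drives the scheduler through a small RPC plugin: it creates a set of pinned threads with different activity levels (-n name, -m core mask, -a requested busy percentage), bumps one thread to 50% at runtime, and deletes another, exactly the rpc_cmd sequence traced above. With the plugin importable, the same calls issued directly would look like:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned to core 0, reporting 100% busy
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned to core 0, idle
    $rpc scheduler_thread_set_active 11 50                        # change thread 11 to 50% busy
    $rpc scheduler_thread_delete 12                               # remove thread 12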
00:04:22.692 00:04:22.692 real 0m3.908s 00:04:22.692 user 0m6.721s 00:04:22.692 sys 0m0.467s 00:04:22.692 17:59:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.692 17:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:22.692 ************************************ 00:04:22.692 END TEST event_scheduler 00:04:22.692 ************************************ 00:04:22.692 17:59:20 -- event/event.sh@51 -- # modprobe -n nbd 00:04:22.692 17:59:20 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:22.692 17:59:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.692 17:59:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.692 17:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:22.692 ************************************ 00:04:22.692 START TEST app_repeat 00:04:22.692 ************************************ 00:04:22.692 17:59:20 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:22.692 17:59:20 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.692 17:59:20 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.692 17:59:20 -- event/event.sh@13 -- # local nbd_list 00:04:22.692 17:59:20 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.692 17:59:20 -- event/event.sh@14 -- # local bdev_list 00:04:22.692 17:59:20 -- event/event.sh@15 -- # local repeat_times=4 00:04:22.692 17:59:20 -- event/event.sh@17 -- # modprobe nbd 00:04:22.692 17:59:20 -- event/event.sh@19 -- # repeat_pid=56847 00:04:22.692 17:59:20 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.692 Process app_repeat pid: 56847 00:04:22.692 17:59:20 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56847' 00:04:22.692 17:59:20 -- event/event.sh@23 -- # for i in {0..2} 00:04:22.692 17:59:20 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:22.692 spdk_app_start Round 0 00:04:22.692 17:59:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:22.692 17:59:20 -- event/event.sh@25 -- # waitforlisten 56847 /var/tmp/spdk-nbd.sock 00:04:22.692 17:59:20 -- common/autotest_common.sh@819 -- # '[' -z 56847 ']' 00:04:22.692 17:59:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.692 17:59:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:22.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.692 17:59:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.692 17:59:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:22.692 17:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:22.952 [2024-04-25 17:59:20.634471] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:22.952 [2024-04-25 17:59:20.634555] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56847 ] 00:04:22.952 [2024-04-25 17:59:20.774202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.210 [2024-04-25 17:59:20.895380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.210 [2024-04-25 17:59:20.895390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.777 17:59:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:23.777 17:59:21 -- common/autotest_common.sh@852 -- # return 0 00:04:23.777 17:59:21 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.036 Malloc0 00:04:24.036 17:59:21 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.295 Malloc1 00:04:24.295 17:59:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@12 -- # local i 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.295 17:59:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.555 /dev/nbd0 00:04:24.555 17:59:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.555 17:59:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.555 17:59:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:24.555 17:59:22 -- common/autotest_common.sh@857 -- # local i 00:04:24.555 17:59:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:24.555 17:59:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:24.555 17:59:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:24.555 17:59:22 -- common/autotest_common.sh@861 -- # break 00:04:24.555 17:59:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:24.555 17:59:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:24.555 17:59:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.555 1+0 records in 00:04:24.555 1+0 records out 00:04:24.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349732 s, 11.7 MB/s 00:04:24.555 17:59:22 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.814 17:59:22 -- common/autotest_common.sh@874 -- # size=4096 00:04:24.814 17:59:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.814 17:59:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:24.814 17:59:22 -- common/autotest_common.sh@877 -- # return 0 00:04:24.814 17:59:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.814 17:59:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.814 17:59:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.814 /dev/nbd1 00:04:24.814 17:59:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.814 17:59:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.814 17:59:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:24.814 17:59:22 -- common/autotest_common.sh@857 -- # local i 00:04:24.814 17:59:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:24.814 17:59:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:24.814 17:59:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:24.814 17:59:22 -- common/autotest_common.sh@861 -- # break 00:04:24.814 17:59:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:24.814 17:59:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:24.814 17:59:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.814 1+0 records in 00:04:24.814 1+0 records out 00:04:24.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250252 s, 16.4 MB/s 00:04:25.073 17:59:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.073 17:59:22 -- common/autotest_common.sh@874 -- # size=4096 00:04:25.073 17:59:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.073 17:59:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:25.073 17:59:22 -- common/autotest_common.sh@877 -- # return 0 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.073 { 00:04:25.073 "bdev_name": "Malloc0", 00:04:25.073 "nbd_device": "/dev/nbd0" 00:04:25.073 }, 00:04:25.073 { 00:04:25.073 "bdev_name": "Malloc1", 00:04:25.073 "nbd_device": "/dev/nbd1" 00:04:25.073 } 00:04:25.073 ]' 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.073 { 00:04:25.073 "bdev_name": "Malloc0", 00:04:25.073 "nbd_device": "/dev/nbd0" 00:04:25.073 }, 00:04:25.073 { 00:04:25.073 "bdev_name": "Malloc1", 00:04:25.073 "nbd_device": "/dev/nbd1" 00:04:25.073 } 00:04:25.073 ]' 00:04:25.073 17:59:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.332 /dev/nbd1' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:04:25.332 /dev/nbd1' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.332 256+0 records in 00:04:25.332 256+0 records out 00:04:25.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074799 s, 140 MB/s 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.332 256+0 records in 00:04:25.332 256+0 records out 00:04:25.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251879 s, 41.6 MB/s 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.332 256+0 records in 00:04:25.332 256+0 records out 00:04:25.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315305 s, 33.3 MB/s 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@51 -- # local i 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.332 17:59:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@41 -- # break 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.601 17:59:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@41 -- # break 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.864 17:59:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.123 17:59:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.123 17:59:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.123 17:59:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@65 -- # true 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.123 17:59:24 -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.123 17:59:24 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.382 17:59:24 -- event/event.sh@35 -- # sleep 3 00:04:26.640 [2024-04-25 17:59:24.529502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.898 [2024-04-25 17:59:24.623170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.898 [2024-04-25 17:59:24.623175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.898 [2024-04-25 17:59:24.681035] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.898 [2024-04-25 17:59:24.681104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
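The waitfornbd calls traced above amount to a two-stage poll: wait for the device name to show up in /proc/partitions, then prove the export actually serves I/O with a single 4 KiB O_DIRECT read. A minimal stand-alone sketch of that pattern follows; the helper name, retry counts and temp-file path are illustrative rather than a copy of autotest_common.sh:

  wait_for_nbd() {
      local nbd_name=$1 tmp=${TMPDIR:-/tmp}/nbdtest i size
      # Stage 1: the kernel lists the device once the NBD connection is up.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # Stage 2: a one-block direct read confirms the backing bdev answers I/O.
      for ((i = 1; i <= 20; i++)); do
          if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
              size=$(stat -c %s "$tmp")
              rm -f "$tmp"
              [[ $size -ne 0 ]] && return 0
          fi
          sleep 0.1
      done
      return 1
  }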
00:04:29.428 17:59:27 -- event/event.sh@23 -- # for i in {0..2} 00:04:29.428 spdk_app_start Round 1 00:04:29.428 17:59:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:29.428 17:59:27 -- event/event.sh@25 -- # waitforlisten 56847 /var/tmp/spdk-nbd.sock 00:04:29.428 17:59:27 -- common/autotest_common.sh@819 -- # '[' -z 56847 ']' 00:04:29.428 17:59:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.428 17:59:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:29.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:29.428 17:59:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.428 17:59:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:29.428 17:59:27 -- common/autotest_common.sh@10 -- # set +x 00:04:29.686 17:59:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:29.686 17:59:27 -- common/autotest_common.sh@852 -- # return 0 00:04:29.686 17:59:27 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.945 Malloc0 00:04:29.945 17:59:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.204 Malloc1 00:04:30.204 17:59:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@12 -- # local i 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.204 17:59:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.463 /dev/nbd0 00:04:30.463 17:59:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.463 17:59:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.463 17:59:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:30.463 17:59:28 -- common/autotest_common.sh@857 -- # local i 00:04:30.463 17:59:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:30.463 17:59:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:30.463 17:59:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:30.463 17:59:28 -- common/autotest_common.sh@861 -- # break 00:04:30.463 17:59:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:30.463 17:59:28 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:04:30.463 17:59:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.463 1+0 records in 00:04:30.463 1+0 records out 00:04:30.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336112 s, 12.2 MB/s 00:04:30.463 17:59:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.463 17:59:28 -- common/autotest_common.sh@874 -- # size=4096 00:04:30.463 17:59:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.463 17:59:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:30.463 17:59:28 -- common/autotest_common.sh@877 -- # return 0 00:04:30.463 17:59:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.463 17:59:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.463 17:59:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.721 /dev/nbd1 00:04:30.721 17:59:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.721 17:59:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.721 17:59:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:30.721 17:59:28 -- common/autotest_common.sh@857 -- # local i 00:04:30.721 17:59:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:30.721 17:59:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:30.721 17:59:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:30.721 17:59:28 -- common/autotest_common.sh@861 -- # break 00:04:30.721 17:59:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:30.721 17:59:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:30.721 17:59:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.721 1+0 records in 00:04:30.721 1+0 records out 00:04:30.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363041 s, 11.3 MB/s 00:04:30.721 17:59:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.721 17:59:28 -- common/autotest_common.sh@874 -- # size=4096 00:04:30.721 17:59:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.721 17:59:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:30.721 17:59:28 -- common/autotest_common.sh@877 -- # return 0 00:04:30.721 17:59:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.721 17:59:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.980 17:59:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.980 17:59:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.980 17:59:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.980 17:59:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.980 { 00:04:30.980 "bdev_name": "Malloc0", 00:04:30.980 "nbd_device": "/dev/nbd0" 00:04:30.980 }, 00:04:30.980 { 00:04:30.980 "bdev_name": "Malloc1", 00:04:30.980 "nbd_device": "/dev/nbd1" 00:04:30.980 } 00:04:30.980 ]' 00:04:30.980 17:59:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.980 { 00:04:30.980 "bdev_name": "Malloc0", 00:04:30.980 "nbd_device": "/dev/nbd0" 00:04:30.980 }, 00:04:30.980 { 00:04:30.980 "bdev_name": "Malloc1", 00:04:30.980 "nbd_device": "/dev/nbd1" 00:04:30.980 } 
00:04:30.980 ]' 00:04:30.980 17:59:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.240 /dev/nbd1' 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.240 /dev/nbd1' 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.240 256+0 records in 00:04:31.240 256+0 records out 00:04:31.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010536 s, 99.5 MB/s 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.240 256+0 records in 00:04:31.240 256+0 records out 00:04:31.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252984 s, 41.4 MB/s 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.240 17:59:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.240 256+0 records in 00:04:31.240 256+0 records out 00:04:31.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305773 s, 34.3 MB/s 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:04:31.240 17:59:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@51 -- # local i 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.240 17:59:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@41 -- # break 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.498 17:59:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@41 -- # break 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.756 17:59:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@65 -- # true 00:04:32.014 17:59:29 -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.015 17:59:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.015 17:59:29 -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.015 17:59:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.015 17:59:29 -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.015 17:59:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.582 17:59:30 -- event/event.sh@35 -- # sleep 3 00:04:32.582 [2024-04-25 17:59:30.456310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.840 [2024-04-25 17:59:30.569458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.840 [2024-04-25 17:59:30.569458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.840 [2024-04-25 17:59:30.624146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
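The write/verify pass repeated in every round above is plain dd plus cmp: fill a scratch file with 1 MiB of random data, copy it onto each exported NBD device with O_DIRECT, then byte-compare the first 1 MiB of every device against that file. A hedged sketch of the flow, with the scratch-file location and device list passed in rather than hard-coded:

  nbd_write_verify() {
      local devices=("$@") rand=${TMPDIR:-/tmp}/nbdrandtest dev
      # 256 x 4 KiB = 1 MiB of random reference data.
      dd if=/dev/urandom of="$rand" bs=4096 count=256
      # Write the same pattern to every NBD device, bypassing the page cache.
      for dev in "${devices[@]}"; do
          dd if="$rand" of="$dev" bs=4096 count=256 oflag=direct
      done
      # Read back and compare; cmp exits non-zero on the first differing byte.
      for dev in "${devices[@]}"; do
          cmp -b -n 1M "$rand" "$dev" || return 1
      done
      rm -f "$rand"
  }

  # e.g. nbd_write_verify /dev/nbd0 /dev/nbd1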
00:04:32.840 [2024-04-25 17:59:30.624203] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.395 17:59:33 -- event/event.sh@23 -- # for i in {0..2} 00:04:35.395 spdk_app_start Round 2 00:04:35.395 17:59:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:35.395 17:59:33 -- event/event.sh@25 -- # waitforlisten 56847 /var/tmp/spdk-nbd.sock 00:04:35.395 17:59:33 -- common/autotest_common.sh@819 -- # '[' -z 56847 ']' 00:04:35.395 17:59:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.395 17:59:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:35.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:35.395 17:59:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.395 17:59:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:35.395 17:59:33 -- common/autotest_common.sh@10 -- # set +x 00:04:35.652 17:59:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:35.652 17:59:33 -- common/autotest_common.sh@852 -- # return 0 00:04:35.652 17:59:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.911 Malloc0 00:04:35.911 17:59:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.170 Malloc1 00:04:36.170 17:59:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@12 -- # local i 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.170 17:59:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:36.428 /dev/nbd0 00:04:36.428 17:59:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.428 17:59:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.428 17:59:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:36.428 17:59:34 -- common/autotest_common.sh@857 -- # local i 00:04:36.428 17:59:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:36.428 17:59:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:36.428 17:59:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:36.428 17:59:34 -- common/autotest_common.sh@861 
-- # break 00:04:36.428 17:59:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:36.428 17:59:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:36.428 17:59:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.428 1+0 records in 00:04:36.428 1+0 records out 00:04:36.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284625 s, 14.4 MB/s 00:04:36.428 17:59:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.428 17:59:34 -- common/autotest_common.sh@874 -- # size=4096 00:04:36.428 17:59:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.428 17:59:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:36.428 17:59:34 -- common/autotest_common.sh@877 -- # return 0 00:04:36.428 17:59:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.428 17:59:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.428 17:59:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.687 /dev/nbd1 00:04:36.687 17:59:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.687 17:59:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.687 17:59:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:36.687 17:59:34 -- common/autotest_common.sh@857 -- # local i 00:04:36.687 17:59:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:36.687 17:59:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:36.687 17:59:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:36.687 17:59:34 -- common/autotest_common.sh@861 -- # break 00:04:36.687 17:59:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:36.687 17:59:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:36.687 17:59:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.687 1+0 records in 00:04:36.687 1+0 records out 00:04:36.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339185 s, 12.1 MB/s 00:04:36.687 17:59:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.687 17:59:34 -- common/autotest_common.sh@874 -- # size=4096 00:04:36.687 17:59:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.687 17:59:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:36.687 17:59:34 -- common/autotest_common.sh@877 -- # return 0 00:04:36.687 17:59:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.687 17:59:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.687 17:59:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.687 17:59:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.687 17:59:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.945 17:59:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.945 { 00:04:36.945 "bdev_name": "Malloc0", 00:04:36.945 "nbd_device": "/dev/nbd0" 00:04:36.945 }, 00:04:36.945 { 00:04:36.945 "bdev_name": "Malloc1", 00:04:36.945 "nbd_device": "/dev/nbd1" 00:04:36.945 } 00:04:36.945 ]' 00:04:36.945 17:59:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.945 { 00:04:36.945 "bdev_name": "Malloc0", 00:04:36.946 
"nbd_device": "/dev/nbd0" 00:04:36.946 }, 00:04:36.946 { 00:04:36.946 "bdev_name": "Malloc1", 00:04:36.946 "nbd_device": "/dev/nbd1" 00:04:36.946 } 00:04:36.946 ]' 00:04:36.946 17:59:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:37.204 /dev/nbd1' 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:37.204 /dev/nbd1' 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@65 -- # count=2 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@95 -- # count=2 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:37.204 256+0 records in 00:04:37.204 256+0 records out 00:04:37.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00769006 s, 136 MB/s 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.204 256+0 records in 00:04:37.204 256+0 records out 00:04:37.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024805 s, 42.3 MB/s 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:37.204 256+0 records in 00:04:37.204 256+0 records out 00:04:37.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257358 s, 40.7 MB/s 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.204 17:59:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.204 17:59:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.205 17:59:35 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@51 -- # local i 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.205 17:59:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@41 -- # break 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.464 17:59:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.722 17:59:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.722 17:59:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.722 17:59:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.722 17:59:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.722 17:59:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.723 17:59:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.723 17:59:35 -- bdev/nbd_common.sh@41 -- # break 00:04:37.723 17:59:35 -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.723 17:59:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.723 17:59:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.723 17:59:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.981 17:59:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.981 17:59:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.981 17:59:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@65 -- # true 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.239 17:59:35 -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.239 17:59:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:38.503 17:59:36 -- event/event.sh@35 -- # sleep 3 00:04:38.503 [2024-04-25 17:59:36.417579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.767 [2024-04-25 17:59:36.500919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.767 [2024-04-25 17:59:36.500930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.767 [2024-04-25 17:59:36.559800] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:04:38.767 [2024-04-25 17:59:36.559859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:41.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:41.299 17:59:39 -- event/event.sh@38 -- # waitforlisten 56847 /var/tmp/spdk-nbd.sock 00:04:41.299 17:59:39 -- common/autotest_common.sh@819 -- # '[' -z 56847 ']' 00:04:41.299 17:59:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.299 17:59:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:41.299 17:59:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.299 17:59:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:41.299 17:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:41.557 17:59:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:41.557 17:59:39 -- common/autotest_common.sh@852 -- # return 0 00:04:41.557 17:59:39 -- event/event.sh@39 -- # killprocess 56847 00:04:41.557 17:59:39 -- common/autotest_common.sh@926 -- # '[' -z 56847 ']' 00:04:41.557 17:59:39 -- common/autotest_common.sh@930 -- # kill -0 56847 00:04:41.557 17:59:39 -- common/autotest_common.sh@931 -- # uname 00:04:41.557 17:59:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:41.557 17:59:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56847 00:04:41.816 killing process with pid 56847 00:04:41.816 17:59:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:41.816 17:59:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:41.816 17:59:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56847' 00:04:41.816 17:59:39 -- common/autotest_common.sh@945 -- # kill 56847 00:04:41.816 17:59:39 -- common/autotest_common.sh@950 -- # wait 56847 00:04:41.816 spdk_app_start is called in Round 0. 00:04:41.816 Shutdown signal received, stop current app iteration 00:04:41.816 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:04:41.816 spdk_app_start is called in Round 1. 00:04:41.816 Shutdown signal received, stop current app iteration 00:04:41.816 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:04:41.816 spdk_app_start is called in Round 2. 00:04:41.816 Shutdown signal received, stop current app iteration 00:04:41.816 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:04:41.816 spdk_app_start is called in Round 3. 
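Stepping back, the rounds traced above are driven by one loop in event.sh's app_repeat test: each iteration waits for the relaunched app to listen on the dedicated /var/tmp/spdk-nbd.sock socket, creates two 64 MB malloc bdevs (4 KiB blocks), runs the NBD export/verify pass, then asks the app to terminate itself and pauses before the next round. A condensed sketch of that driver loop, assuming $app_pid is the pid of the already-started app_repeat binary and that the suite's waitforlisten and nbd_rpc_data_verify helpers are available (round numbering simplified):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  for round in {0..2}; do
      echo "spdk_app_start Round $round"
      waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock
      # Two 64 MB malloc bdevs with a 4 KiB block size, exported as /dev/nbd0 and /dev/nbd1.
      m0=$($rpc bdev_malloc_create 64 4096)
      m1=$($rpc bdev_malloc_create 64 4096)
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock "$m0 $m1" "/dev/nbd0 /dev/nbd1"
      # Ask the app to shut down; it restarts itself for the next round.
      $rpc spdk_kill_instance SIGTERM
      sleep 3
  done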
00:04:41.816 Shutdown signal received, stop current app iteration 00:04:41.816 17:59:39 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:41.816 17:59:39 -- event/event.sh@42 -- # return 0 00:04:41.816 00:04:41.816 real 0m19.139s 00:04:41.816 user 0m42.649s 00:04:41.816 sys 0m3.114s 00:04:41.816 17:59:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.816 ************************************ 00:04:41.816 END TEST app_repeat 00:04:41.816 ************************************ 00:04:41.816 17:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.075 17:59:39 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:42.075 17:59:39 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:42.075 17:59:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.075 17:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.075 17:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.075 ************************************ 00:04:42.075 START TEST cpu_locks 00:04:42.075 ************************************ 00:04:42.075 17:59:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:42.075 * Looking for test storage... 00:04:42.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:42.075 17:59:39 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:42.075 17:59:39 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:42.075 17:59:39 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:42.076 17:59:39 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:42.076 17:59:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.076 17:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.076 17:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.076 ************************************ 00:04:42.076 START TEST default_locks 00:04:42.076 ************************************ 00:04:42.076 17:59:39 -- common/autotest_common.sh@1104 -- # default_locks 00:04:42.076 17:59:39 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57471 00:04:42.076 17:59:39 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.076 17:59:39 -- event/cpu_locks.sh@47 -- # waitforlisten 57471 00:04:42.076 17:59:39 -- common/autotest_common.sh@819 -- # '[' -z 57471 ']' 00:04:42.076 17:59:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.076 17:59:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:42.076 17:59:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.076 17:59:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:42.076 17:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.076 [2024-04-25 17:59:39.965649] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
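The waitforlisten call entered above is essentially a bounded poll of the new target's RPC socket: keep checking that the launched pid is still alive and retry a cheap RPC until it answers or the retry budget runs out. A simplified sketch of that idea; the probe RPC and retry spacing are assumptions, not a copy of autotest_common.sh:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          # Bail out early if the target died during startup.
          kill -0 "$pid" 2>/dev/null || return 1
          # Any successful RPC proves the socket is up; rpc_get_methods is cheap.
          scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }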
00:04:42.076 [2024-04-25 17:59:39.965954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57471 ] 00:04:42.334 [2024-04-25 17:59:40.107866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.334 [2024-04-25 17:59:40.226748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.334 [2024-04-25 17:59:40.226933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.276 17:59:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:43.276 17:59:40 -- common/autotest_common.sh@852 -- # return 0 00:04:43.276 17:59:40 -- event/cpu_locks.sh@49 -- # locks_exist 57471 00:04:43.276 17:59:40 -- event/cpu_locks.sh@22 -- # lslocks -p 57471 00:04:43.276 17:59:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.534 17:59:41 -- event/cpu_locks.sh@50 -- # killprocess 57471 00:04:43.534 17:59:41 -- common/autotest_common.sh@926 -- # '[' -z 57471 ']' 00:04:43.534 17:59:41 -- common/autotest_common.sh@930 -- # kill -0 57471 00:04:43.534 17:59:41 -- common/autotest_common.sh@931 -- # uname 00:04:43.534 17:59:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:43.534 17:59:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57471 00:04:43.534 killing process with pid 57471 00:04:43.534 17:59:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:43.534 17:59:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:43.534 17:59:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57471' 00:04:43.534 17:59:41 -- common/autotest_common.sh@945 -- # kill 57471 00:04:43.534 17:59:41 -- common/autotest_common.sh@950 -- # wait 57471 00:04:44.102 17:59:41 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57471 00:04:44.102 17:59:41 -- common/autotest_common.sh@640 -- # local es=0 00:04:44.102 17:59:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57471 00:04:44.102 17:59:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:44.102 17:59:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.102 17:59:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:44.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.102 17:59:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.102 17:59:41 -- common/autotest_common.sh@643 -- # waitforlisten 57471 00:04:44.102 17:59:41 -- common/autotest_common.sh@819 -- # '[' -z 57471 ']' 00:04:44.102 17:59:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.102 17:59:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.102 17:59:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
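Two helpers carry most of this default_locks test: a lock probe that asks lslocks whether the pid holds an spdk_cpu_lock file, and a killprocess that refuses to signal anything it does not recognize before doing kill and wait. Hedged sketches of both, with the privileged-process handling simplified relative to the real helper:

  locks_exist() {
      # The target flocks per-core files whose names contain spdk_cpu_lock.
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid" || return 1
      process_name=$(ps --no-headers -o comm= "$pid")
      # The target shows up as reactor_0; never blindly signal a sudo wrapper.
      [[ $process_name == sudo ]] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }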
00:04:44.102 17:59:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.102 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:44.102 ERROR: process (pid: 57471) is no longer running 00:04:44.102 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57471) - No such process 00:04:44.102 17:59:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.102 17:59:41 -- common/autotest_common.sh@852 -- # return 1 00:04:44.102 17:59:41 -- common/autotest_common.sh@643 -- # es=1 00:04:44.102 17:59:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:44.102 17:59:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:44.102 17:59:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:44.102 17:59:41 -- event/cpu_locks.sh@54 -- # no_locks 00:04:44.102 17:59:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.102 ************************************ 00:04:44.102 END TEST default_locks 00:04:44.102 ************************************ 00:04:44.102 17:59:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.102 17:59:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.102 00:04:44.102 real 0m1.951s 00:04:44.102 user 0m2.045s 00:04:44.102 sys 0m0.614s 00:04:44.102 17:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.102 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:44.102 17:59:41 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:44.102 17:59:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.102 17:59:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.102 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:44.102 ************************************ 00:04:44.102 START TEST default_locks_via_rpc 00:04:44.102 ************************************ 00:04:44.102 17:59:41 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:44.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.102 17:59:41 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57535 00:04:44.102 17:59:41 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.102 17:59:41 -- event/cpu_locks.sh@63 -- # waitforlisten 57535 00:04:44.102 17:59:41 -- common/autotest_common.sh@819 -- # '[' -z 57535 ']' 00:04:44.102 17:59:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.102 17:59:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.102 17:59:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.102 17:59:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.102 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:44.102 [2024-04-25 17:59:41.954194] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
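The NOT wrapper exercised here simply inverts a command's exit status, so an expected failure (waiting on a pid that is already gone) counts as a pass. A minimal sketch of the idea, without the argument validation the real helper performs:

  NOT() {
      # Succeed only if the wrapped command fails.
      if "$@"; then
          return 1
      fi
      return 0
  }

  # e.g. NOT waitforlisten "$dead_pid"   # passes, because waitforlisten errors out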
00:04:44.102 [2024-04-25 17:59:41.954323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57535 ] 00:04:44.361 [2024-04-25 17:59:42.088924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.361 [2024-04-25 17:59:42.204075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.361 [2024-04-25 17:59:42.204267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.296 17:59:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:45.296 17:59:42 -- common/autotest_common.sh@852 -- # return 0 00:04:45.296 17:59:42 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:45.296 17:59:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.296 17:59:42 -- common/autotest_common.sh@10 -- # set +x 00:04:45.296 17:59:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.296 17:59:42 -- event/cpu_locks.sh@67 -- # no_locks 00:04:45.296 17:59:42 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:45.296 17:59:42 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:45.296 17:59:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:45.296 17:59:42 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:45.296 17:59:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.296 17:59:42 -- common/autotest_common.sh@10 -- # set +x 00:04:45.296 17:59:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.296 17:59:42 -- event/cpu_locks.sh@71 -- # locks_exist 57535 00:04:45.296 17:59:42 -- event/cpu_locks.sh@22 -- # lslocks -p 57535 00:04:45.296 17:59:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.555 17:59:43 -- event/cpu_locks.sh@73 -- # killprocess 57535 00:04:45.555 17:59:43 -- common/autotest_common.sh@926 -- # '[' -z 57535 ']' 00:04:45.555 17:59:43 -- common/autotest_common.sh@930 -- # kill -0 57535 00:04:45.555 17:59:43 -- common/autotest_common.sh@931 -- # uname 00:04:45.555 17:59:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:45.555 17:59:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57535 00:04:45.555 killing process with pid 57535 00:04:45.555 17:59:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:45.555 17:59:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:45.555 17:59:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57535' 00:04:45.555 17:59:43 -- common/autotest_common.sh@945 -- # kill 57535 00:04:45.555 17:59:43 -- common/autotest_common.sh@950 -- # wait 57535 00:04:46.123 ************************************ 00:04:46.123 END TEST default_locks_via_rpc 00:04:46.123 ************************************ 00:04:46.123 00:04:46.123 real 0m1.884s 00:04:46.123 user 0m2.029s 00:04:46.123 sys 0m0.593s 00:04:46.123 17:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.123 17:59:43 -- common/autotest_common.sh@10 -- # set +x 00:04:46.123 17:59:43 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:46.123 17:59:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.123 17:59:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.123 17:59:43 -- common/autotest_common.sh@10 -- # set +x 00:04:46.123 
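default_locks_via_rpc exercises the same per-core lock files but toggles them at runtime over the RPC socket instead of at launch. Reduced to its essentials, and assuming the target is already listening on /var/tmp/spdk.sock with its pid in $spdk_tgt_pid:

  rpc=scripts/rpc.py

  # Drop the per-core lock files while the app keeps running...
  $rpc framework_disable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held"

  # ...then take them again and confirm they are back.
  $rpc framework_enable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "unexpected: locks missing"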
************************************ 00:04:46.123 START TEST non_locking_app_on_locked_coremask 00:04:46.123 ************************************ 00:04:46.123 17:59:43 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:04:46.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.123 17:59:43 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57604 00:04:46.123 17:59:43 -- event/cpu_locks.sh@81 -- # waitforlisten 57604 /var/tmp/spdk.sock 00:04:46.123 17:59:43 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.123 17:59:43 -- common/autotest_common.sh@819 -- # '[' -z 57604 ']' 00:04:46.123 17:59:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.123 17:59:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:46.123 17:59:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.123 17:59:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:46.123 17:59:43 -- common/autotest_common.sh@10 -- # set +x 00:04:46.123 [2024-04-25 17:59:43.898367] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:46.123 [2024-04-25 17:59:43.898471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57604 ] 00:04:46.123 [2024-04-25 17:59:44.037065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.382 [2024-04-25 17:59:44.138498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.382 [2024-04-25 17:59:44.138675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.318 17:59:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.318 17:59:44 -- common/autotest_common.sh@852 -- # return 0 00:04:47.318 17:59:44 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57632 00:04:47.318 17:59:44 -- event/cpu_locks.sh@85 -- # waitforlisten 57632 /var/tmp/spdk2.sock 00:04:47.318 17:59:44 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:47.318 17:59:44 -- common/autotest_common.sh@819 -- # '[' -z 57632 ']' 00:04:47.318 17:59:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.318 17:59:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:47.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.318 17:59:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.318 17:59:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:47.318 17:59:44 -- common/autotest_common.sh@10 -- # set +x 00:04:47.318 [2024-04-25 17:59:45.013397] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:47.318 [2024-04-25 17:59:45.014126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57632 ] 00:04:47.318 [2024-04-25 17:59:45.159467] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
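non_locking_app_on_locked_coremask needs two targets alive at once on the same core mask, which only works because the second instance is told to skip the core locks and to serve RPC on a different socket. The two launch lines traced above reduce to roughly the following (binary path abbreviated, waitforlisten as sketched earlier):

  # First target claims core 0 and holds its per-core lock file.
  build/bin/spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

  # Second target shares core 0 but skips the locks and uses its own socket,
  # so the two processes collide neither on the lock file nor on the RPC socket.
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!
  waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock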
00:04:47.318 [2024-04-25 17:59:45.159533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.579 [2024-04-25 17:59:45.415474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.579 [2024-04-25 17:59:45.415691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.147 17:59:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:48.147 17:59:46 -- common/autotest_common.sh@852 -- # return 0 00:04:48.147 17:59:46 -- event/cpu_locks.sh@87 -- # locks_exist 57604 00:04:48.147 17:59:46 -- event/cpu_locks.sh@22 -- # lslocks -p 57604 00:04:48.147 17:59:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.084 17:59:46 -- event/cpu_locks.sh@89 -- # killprocess 57604 00:04:49.084 17:59:46 -- common/autotest_common.sh@926 -- # '[' -z 57604 ']' 00:04:49.084 17:59:46 -- common/autotest_common.sh@930 -- # kill -0 57604 00:04:49.084 17:59:46 -- common/autotest_common.sh@931 -- # uname 00:04:49.084 17:59:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.084 17:59:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57604 00:04:49.084 17:59:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:49.084 killing process with pid 57604 00:04:49.084 17:59:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:49.084 17:59:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57604' 00:04:49.084 17:59:46 -- common/autotest_common.sh@945 -- # kill 57604 00:04:49.084 17:59:46 -- common/autotest_common.sh@950 -- # wait 57604 00:04:49.652 17:59:47 -- event/cpu_locks.sh@90 -- # killprocess 57632 00:04:49.652 17:59:47 -- common/autotest_common.sh@926 -- # '[' -z 57632 ']' 00:04:49.652 17:59:47 -- common/autotest_common.sh@930 -- # kill -0 57632 00:04:49.652 17:59:47 -- common/autotest_common.sh@931 -- # uname 00:04:49.652 17:59:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.652 17:59:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57632 00:04:49.911 17:59:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:49.911 killing process with pid 57632 00:04:49.911 17:59:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:49.911 17:59:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57632' 00:04:49.911 17:59:47 -- common/autotest_common.sh@945 -- # kill 57632 00:04:49.911 17:59:47 -- common/autotest_common.sh@950 -- # wait 57632 00:04:50.170 00:04:50.170 real 0m4.190s 00:04:50.170 user 0m4.670s 00:04:50.170 sys 0m1.110s 00:04:50.170 17:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.170 17:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.170 ************************************ 00:04:50.170 END TEST non_locking_app_on_locked_coremask 00:04:50.170 ************************************ 00:04:50.170 17:59:48 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:50.170 17:59:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.170 17:59:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.170 17:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.170 ************************************ 00:04:50.170 START TEST locking_app_on_unlocked_coremask 00:04:50.170 ************************************ 00:04:50.170 17:59:48 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:04:50.170 17:59:48 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57711 00:04:50.170 17:59:48 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:50.170 17:59:48 -- event/cpu_locks.sh@99 -- # waitforlisten 57711 /var/tmp/spdk.sock 00:04:50.170 17:59:48 -- common/autotest_common.sh@819 -- # '[' -z 57711 ']' 00:04:50.170 17:59:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.170 17:59:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.170 17:59:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.170 17:59:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.170 17:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.429 [2024-04-25 17:59:48.145669] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:50.430 [2024-04-25 17:59:48.145786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57711 ] 00:04:50.430 [2024-04-25 17:59:48.281591] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:50.430 [2024-04-25 17:59:48.281644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.688 [2024-04-25 17:59:48.404516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.688 [2024-04-25 17:59:48.404689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.256 17:59:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.256 17:59:49 -- common/autotest_common.sh@852 -- # return 0 00:04:51.256 17:59:49 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57739 00:04:51.256 17:59:49 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.256 17:59:49 -- event/cpu_locks.sh@103 -- # waitforlisten 57739 /var/tmp/spdk2.sock 00:04:51.256 17:59:49 -- common/autotest_common.sh@819 -- # '[' -z 57739 ']' 00:04:51.256 17:59:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.256 17:59:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.256 17:59:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.256 17:59:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.256 17:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:51.256 [2024-04-25 17:59:49.170616] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:51.256 [2024-04-25 17:59:49.170717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57739 ] 00:04:51.516 [2024-04-25 17:59:49.310313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.774 [2024-04-25 17:59:49.497005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.774 [2024-04-25 17:59:49.497196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.380 17:59:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.380 17:59:50 -- common/autotest_common.sh@852 -- # return 0 00:04:52.380 17:59:50 -- event/cpu_locks.sh@105 -- # locks_exist 57739 00:04:52.380 17:59:50 -- event/cpu_locks.sh@22 -- # lslocks -p 57739 00:04:52.380 17:59:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.947 17:59:50 -- event/cpu_locks.sh@107 -- # killprocess 57711 00:04:52.947 17:59:50 -- common/autotest_common.sh@926 -- # '[' -z 57711 ']' 00:04:52.947 17:59:50 -- common/autotest_common.sh@930 -- # kill -0 57711 00:04:52.947 17:59:50 -- common/autotest_common.sh@931 -- # uname 00:04:52.947 17:59:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:52.947 17:59:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57711 00:04:52.947 killing process with pid 57711 00:04:52.947 17:59:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:52.947 17:59:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:52.947 17:59:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57711' 00:04:52.947 17:59:50 -- common/autotest_common.sh@945 -- # kill 57711 00:04:52.947 17:59:50 -- common/autotest_common.sh@950 -- # wait 57711 00:04:53.881 17:59:51 -- event/cpu_locks.sh@108 -- # killprocess 57739 00:04:53.881 17:59:51 -- common/autotest_common.sh@926 -- # '[' -z 57739 ']' 00:04:53.881 17:59:51 -- common/autotest_common.sh@930 -- # kill -0 57739 00:04:53.881 17:59:51 -- common/autotest_common.sh@931 -- # uname 00:04:53.881 17:59:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:53.881 17:59:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57739 00:04:53.881 17:59:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:53.881 17:59:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:53.881 killing process with pid 57739 00:04:53.881 17:59:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57739' 00:04:53.881 17:59:51 -- common/autotest_common.sh@945 -- # kill 57739 00:04:53.881 17:59:51 -- common/autotest_common.sh@950 -- # wait 57739 00:04:54.447 00:04:54.447 real 0m4.096s 00:04:54.447 user 0m4.486s 00:04:54.447 sys 0m1.046s 00:04:54.447 17:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.447 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.447 ************************************ 00:04:54.447 END TEST locking_app_on_unlocked_coremask 00:04:54.447 ************************************ 00:04:54.447 17:59:52 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:54.447 17:59:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.447 17:59:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.447 17:59:52 -- common/autotest_common.sh@10 -- # set +x 
00:04:54.447 ************************************ 00:04:54.447 START TEST locking_app_on_locked_coremask 00:04:54.447 ************************************ 00:04:54.447 17:59:52 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:04:54.447 17:59:52 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57818 00:04:54.447 17:59:52 -- event/cpu_locks.sh@116 -- # waitforlisten 57818 /var/tmp/spdk.sock 00:04:54.447 17:59:52 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.447 17:59:52 -- common/autotest_common.sh@819 -- # '[' -z 57818 ']' 00:04:54.447 17:59:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.447 17:59:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.447 17:59:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.447 17:59:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.447 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.447 [2024-04-25 17:59:52.302847] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:54.447 [2024-04-25 17:59:52.302966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57818 ] 00:04:54.705 [2024-04-25 17:59:52.441793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.705 [2024-04-25 17:59:52.558553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.705 [2024-04-25 17:59:52.558757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.638 17:59:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.638 17:59:53 -- common/autotest_common.sh@852 -- # return 0 00:04:55.638 17:59:53 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.638 17:59:53 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57846 00:04:55.638 17:59:53 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57846 /var/tmp/spdk2.sock 00:04:55.638 17:59:53 -- common/autotest_common.sh@640 -- # local es=0 00:04:55.638 17:59:53 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57846 /var/tmp/spdk2.sock 00:04:55.638 17:59:53 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:55.638 17:59:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.638 17:59:53 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:55.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.638 17:59:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.638 17:59:53 -- common/autotest_common.sh@643 -- # waitforlisten 57846 /var/tmp/spdk2.sock 00:04:55.638 17:59:53 -- common/autotest_common.sh@819 -- # '[' -z 57846 ']' 00:04:55.638 17:59:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.638 17:59:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.638 17:59:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:55.638 17:59:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.639 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:04:55.639 [2024-04-25 17:59:53.382041] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:55.639 [2024-04-25 17:59:53.382180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57846 ] 00:04:55.639 [2024-04-25 17:59:53.524736] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57818 has claimed it. 00:04:55.639 [2024-04-25 17:59:53.524814] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:56.208 ERROR: process (pid: 57846) is no longer running 00:04:56.208 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57846) - No such process 00:04:56.208 17:59:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:56.208 17:59:54 -- common/autotest_common.sh@852 -- # return 1 00:04:56.208 17:59:54 -- common/autotest_common.sh@643 -- # es=1 00:04:56.208 17:59:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:56.208 17:59:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:56.208 17:59:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:56.208 17:59:54 -- event/cpu_locks.sh@122 -- # locks_exist 57818 00:04:56.208 17:59:54 -- event/cpu_locks.sh@22 -- # lslocks -p 57818 00:04:56.208 17:59:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.791 17:59:54 -- event/cpu_locks.sh@124 -- # killprocess 57818 00:04:56.791 17:59:54 -- common/autotest_common.sh@926 -- # '[' -z 57818 ']' 00:04:56.791 17:59:54 -- common/autotest_common.sh@930 -- # kill -0 57818 00:04:56.791 17:59:54 -- common/autotest_common.sh@931 -- # uname 00:04:56.791 17:59:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:56.791 17:59:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57818 00:04:56.791 killing process with pid 57818 00:04:56.791 17:59:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:56.791 17:59:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:56.791 17:59:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57818' 00:04:56.791 17:59:54 -- common/autotest_common.sh@945 -- # kill 57818 00:04:56.791 17:59:54 -- common/autotest_common.sh@950 -- # wait 57818 00:04:57.050 ************************************ 00:04:57.050 END TEST locking_app_on_locked_coremask 00:04:57.050 ************************************ 00:04:57.050 00:04:57.050 real 0m2.746s 00:04:57.050 user 0m3.204s 00:04:57.050 sys 0m0.642s 00:04:57.050 17:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.050 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:57.308 17:59:55 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:57.308 17:59:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.308 17:59:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.308 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.308 ************************************ 00:04:57.308 START TEST locking_overlapped_coremask 00:04:57.308 ************************************ 00:04:57.309 17:59:55 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:04:57.309 17:59:55 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57903 00:04:57.309 17:59:55 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:57.309 17:59:55 -- event/cpu_locks.sh@133 -- # waitforlisten 57903 /var/tmp/spdk.sock 00:04:57.309 17:59:55 -- common/autotest_common.sh@819 -- # '[' -z 57903 ']' 00:04:57.309 17:59:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.309 17:59:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.309 17:59:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.309 17:59:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.309 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.309 [2024-04-25 17:59:55.167882] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:57.309 [2024-04-25 17:59:55.167994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57903 ] 00:04:57.567 [2024-04-25 17:59:55.307149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.567 [2024-04-25 17:59:55.418530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.567 [2024-04-25 17:59:55.418818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.567 [2024-04-25 17:59:55.419531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.567 [2024-04-25 17:59:55.419538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.505 17:59:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.505 17:59:56 -- common/autotest_common.sh@852 -- # return 0 00:04:58.505 17:59:56 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57933 00:04:58.505 17:59:56 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:58.505 17:59:56 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57933 /var/tmp/spdk2.sock 00:04:58.505 17:59:56 -- common/autotest_common.sh@640 -- # local es=0 00:04:58.505 17:59:56 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57933 /var/tmp/spdk2.sock 00:04:58.505 17:59:56 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:58.505 17:59:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:58.505 17:59:56 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:58.505 17:59:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:58.505 17:59:56 -- common/autotest_common.sh@643 -- # waitforlisten 57933 /var/tmp/spdk2.sock 00:04:58.505 17:59:56 -- common/autotest_common.sh@819 -- # '[' -z 57933 ']' 00:04:58.505 17:59:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.505 17:59:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.505 17:59:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:58.505 17:59:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.505 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:58.505 [2024-04-25 17:59:56.212551] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:58.505 [2024-04-25 17:59:56.212703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57933 ] 00:04:58.505 [2024-04-25 17:59:56.357920] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57903 has claimed it. 00:04:58.505 [2024-04-25 17:59:56.357992] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.072 ERROR: process (pid: 57933) is no longer running 00:04:59.073 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57933) - No such process 00:04:59.073 17:59:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.073 17:59:56 -- common/autotest_common.sh@852 -- # return 1 00:04:59.073 17:59:56 -- common/autotest_common.sh@643 -- # es=1 00:04:59.073 17:59:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:59.073 17:59:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:59.073 17:59:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:59.073 17:59:56 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:59.073 17:59:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:59.073 17:59:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:59.073 17:59:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:59.073 17:59:56 -- event/cpu_locks.sh@141 -- # killprocess 57903 00:04:59.073 17:59:56 -- common/autotest_common.sh@926 -- # '[' -z 57903 ']' 00:04:59.073 17:59:56 -- common/autotest_common.sh@930 -- # kill -0 57903 00:04:59.073 17:59:56 -- common/autotest_common.sh@931 -- # uname 00:04:59.073 17:59:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:59.073 17:59:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57903 00:04:59.073 17:59:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:59.073 17:59:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:59.073 killing process with pid 57903 00:04:59.073 17:59:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57903' 00:04:59.073 17:59:56 -- common/autotest_common.sh@945 -- # kill 57903 00:04:59.073 17:59:56 -- common/autotest_common.sh@950 -- # wait 57903 00:04:59.641 00:04:59.641 real 0m2.355s 00:04:59.641 user 0m6.376s 00:04:59.641 sys 0m0.497s 00:04:59.641 17:59:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.641 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 ************************************ 00:04:59.641 END TEST locking_overlapped_coremask 00:04:59.641 ************************************ 00:04:59.641 17:59:57 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:59.641 17:59:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.641 17:59:57 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.641 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 ************************************ 00:04:59.641 START TEST locking_overlapped_coremask_via_rpc 00:04:59.641 ************************************ 00:04:59.641 17:59:57 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:04:59.641 17:59:57 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57979 00:04:59.641 17:59:57 -- event/cpu_locks.sh@149 -- # waitforlisten 57979 /var/tmp/spdk.sock 00:04:59.641 17:59:57 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:59.641 17:59:57 -- common/autotest_common.sh@819 -- # '[' -z 57979 ']' 00:04:59.641 17:59:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.641 17:59:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.641 17:59:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.641 17:59:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.641 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 [2024-04-25 17:59:57.497411] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:59.641 [2024-04-25 17:59:57.497492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57979 ] 00:04:59.900 [2024-04-25 17:59:57.631776] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:59.900 [2024-04-25 17:59:57.631834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.900 [2024-04-25 17:59:57.743079] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.900 [2024-04-25 17:59:57.743396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.900 [2024-04-25 17:59:57.744053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.900 [2024-04-25 17:59:57.744114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.859 17:59:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.859 17:59:58 -- common/autotest_common.sh@852 -- # return 0 00:05:00.859 17:59:58 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58009 00:05:00.859 17:59:58 -- event/cpu_locks.sh@153 -- # waitforlisten 58009 /var/tmp/spdk2.sock 00:05:00.859 17:59:58 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:00.859 17:59:58 -- common/autotest_common.sh@819 -- # '[' -z 58009 ']' 00:05:00.859 17:59:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.859 17:59:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.859 17:59:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:00.859 17:59:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.859 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:00.859 [2024-04-25 17:59:58.496558] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:00.859 [2024-04-25 17:59:58.497179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58009 ] 00:05:00.859 [2024-04-25 17:59:58.640226] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:00.859 [2024-04-25 17:59:58.640294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.117 [2024-04-25 17:59:58.881854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.117 [2024-04-25 17:59:58.882181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.117 [2024-04-25 17:59:58.886489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.117 [2024-04-25 17:59:58.886490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:01.684 17:59:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.684 17:59:59 -- common/autotest_common.sh@852 -- # return 0 00:05:01.684 17:59:59 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.684 17:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.684 17:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:01.684 17:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.684 17:59:59 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.684 17:59:59 -- common/autotest_common.sh@640 -- # local es=0 00:05:01.684 17:59:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.684 17:59:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:01.684 17:59:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:01.684 17:59:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:01.684 17:59:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:01.684 17:59:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.684 17:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.684 17:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:01.684 [2024-04-25 17:59:59.455411] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57979 has claimed it. 
00:05:01.684 2024/04/25 17:59:59 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:01.684 request: 00:05:01.684 { 00:05:01.684 "method": "framework_enable_cpumask_locks", 00:05:01.684 "params": {} 00:05:01.684 } 00:05:01.684 Got JSON-RPC error response 00:05:01.684 GoRPCClient: error on JSON-RPC call 00:05:01.684 17:59:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:01.684 17:59:59 -- common/autotest_common.sh@643 -- # es=1 00:05:01.684 17:59:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:01.684 17:59:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:01.684 17:59:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:01.684 17:59:59 -- event/cpu_locks.sh@158 -- # waitforlisten 57979 /var/tmp/spdk.sock 00:05:01.684 17:59:59 -- common/autotest_common.sh@819 -- # '[' -z 57979 ']' 00:05:01.684 17:59:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.684 17:59:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.684 17:59:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.684 17:59:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.684 17:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:01.943 17:59:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.943 17:59:59 -- common/autotest_common.sh@852 -- # return 0 00:05:01.943 17:59:59 -- event/cpu_locks.sh@159 -- # waitforlisten 58009 /var/tmp/spdk2.sock 00:05:01.943 17:59:59 -- common/autotest_common.sh@819 -- # '[' -z 58009 ']' 00:05:01.943 17:59:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.943 17:59:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.943 17:59:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:01.943 17:59:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.943 17:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.201 18:00:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.201 18:00:00 -- common/autotest_common.sh@852 -- # return 0 00:05:02.201 18:00:00 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:02.201 18:00:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:02.201 18:00:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:02.202 18:00:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:02.202 00:05:02.202 real 0m2.613s 00:05:02.202 user 0m1.326s 00:05:02.202 sys 0m0.236s 00:05:02.202 18:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.202 18:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.202 ************************************ 00:05:02.202 END TEST locking_overlapped_coremask_via_rpc 00:05:02.202 ************************************ 00:05:02.202 18:00:00 -- event/cpu_locks.sh@174 -- # cleanup 00:05:02.202 18:00:00 -- event/cpu_locks.sh@15 -- # [[ -z 57979 ]] 00:05:02.202 18:00:00 -- event/cpu_locks.sh@15 -- # killprocess 57979 00:05:02.202 18:00:00 -- common/autotest_common.sh@926 -- # '[' -z 57979 ']' 00:05:02.202 18:00:00 -- common/autotest_common.sh@930 -- # kill -0 57979 00:05:02.202 18:00:00 -- common/autotest_common.sh@931 -- # uname 00:05:02.202 18:00:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.202 18:00:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57979 00:05:02.202 18:00:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:02.202 killing process with pid 57979 00:05:02.202 18:00:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:02.202 18:00:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57979' 00:05:02.202 18:00:00 -- common/autotest_common.sh@945 -- # kill 57979 00:05:02.202 18:00:00 -- common/autotest_common.sh@950 -- # wait 57979 00:05:02.768 18:00:00 -- event/cpu_locks.sh@16 -- # [[ -z 58009 ]] 00:05:02.768 18:00:00 -- event/cpu_locks.sh@16 -- # killprocess 58009 00:05:02.768 18:00:00 -- common/autotest_common.sh@926 -- # '[' -z 58009 ']' 00:05:02.768 18:00:00 -- common/autotest_common.sh@930 -- # kill -0 58009 00:05:02.768 18:00:00 -- common/autotest_common.sh@931 -- # uname 00:05:02.768 18:00:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.768 18:00:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58009 00:05:02.768 18:00:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:02.768 18:00:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:02.768 18:00:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58009' 00:05:02.768 killing process with pid 58009 00:05:02.768 18:00:00 -- common/autotest_common.sh@945 -- # kill 58009 00:05:02.768 18:00:00 -- common/autotest_common.sh@950 -- # wait 58009 00:05:03.334 18:00:00 -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.334 18:00:00 -- event/cpu_locks.sh@1 -- # cleanup 00:05:03.334 18:00:00 -- event/cpu_locks.sh@15 -- # [[ -z 57979 ]] 00:05:03.334 18:00:00 -- event/cpu_locks.sh@15 -- # killprocess 57979 00:05:03.334 18:00:01 -- 
common/autotest_common.sh@926 -- # '[' -z 57979 ']' 00:05:03.334 18:00:01 -- common/autotest_common.sh@930 -- # kill -0 57979 00:05:03.334 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (57979) - No such process 00:05:03.334 Process with pid 57979 is not found 00:05:03.335 18:00:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 57979 is not found' 00:05:03.335 18:00:01 -- event/cpu_locks.sh@16 -- # [[ -z 58009 ]] 00:05:03.335 Process with pid 58009 is not found 00:05:03.335 18:00:01 -- event/cpu_locks.sh@16 -- # killprocess 58009 00:05:03.335 18:00:01 -- common/autotest_common.sh@926 -- # '[' -z 58009 ']' 00:05:03.335 18:00:01 -- common/autotest_common.sh@930 -- # kill -0 58009 00:05:03.335 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58009) - No such process 00:05:03.335 18:00:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58009 is not found' 00:05:03.335 18:00:01 -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.335 ************************************ 00:05:03.335 END TEST cpu_locks 00:05:03.335 ************************************ 00:05:03.335 00:05:03.335 real 0m21.204s 00:05:03.335 user 0m36.645s 00:05:03.335 sys 0m5.609s 00:05:03.335 18:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.335 18:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.335 ************************************ 00:05:03.335 END TEST event 00:05:03.335 ************************************ 00:05:03.335 00:05:03.335 real 0m48.847s 00:05:03.335 user 1m32.810s 00:05:03.335 sys 0m9.620s 00:05:03.335 18:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.335 18:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.335 18:00:01 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:03.335 18:00:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.335 18:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.335 18:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.335 ************************************ 00:05:03.335 START TEST thread 00:05:03.335 ************************************ 00:05:03.335 18:00:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:03.335 * Looking for test storage... 00:05:03.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:03.335 18:00:01 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.335 18:00:01 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:03.335 18:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.335 18:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.335 ************************************ 00:05:03.335 START TEST thread_poller_perf 00:05:03.335 ************************************ 00:05:03.335 18:00:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.335 [2024-04-25 18:00:01.197116] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:03.335 [2024-04-25 18:00:01.197213] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58160 ] 00:05:03.593 [2024-04-25 18:00:01.330674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.593 [2024-04-25 18:00:01.404049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.593 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:04.983 ====================================== 00:05:04.983 busy:2209722124 (cyc) 00:05:04.983 total_run_count: 327000 00:05:04.983 tsc_hz: 2200000000 (cyc) 00:05:04.983 ====================================== 00:05:04.983 poller_cost: 6757 (cyc), 3071 (nsec) 00:05:04.983 00:05:04.983 real 0m1.336s 00:05:04.983 user 0m1.182s 00:05:04.983 sys 0m0.048s 00:05:04.983 18:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.983 ************************************ 00:05:04.983 END TEST thread_poller_perf 00:05:04.983 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.983 ************************************ 00:05:04.983 18:00:02 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:04.983 18:00:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:04.983 18:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.983 18:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.983 ************************************ 00:05:04.983 START TEST thread_poller_perf 00:05:04.983 ************************************ 00:05:04.983 18:00:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:04.983 [2024-04-25 18:00:02.591571] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:04.983 [2024-04-25 18:00:02.591694] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:05:04.983 [2024-04-25 18:00:02.734088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.983 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:04.983 [2024-04-25 18:00:02.845342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.359 ====================================== 00:05:06.359 busy:2203453696 (cyc) 00:05:06.359 total_run_count: 4323000 00:05:06.359 tsc_hz: 2200000000 (cyc) 00:05:06.359 ====================================== 00:05:06.359 poller_cost: 509 (cyc), 231 (nsec) 00:05:06.359 00:05:06.359 real 0m1.393s 00:05:06.359 user 0m1.227s 00:05:06.359 sys 0m0.058s 00:05:06.359 18:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.359 ************************************ 00:05:06.359 END TEST thread_poller_perf 00:05:06.359 ************************************ 00:05:06.359 18:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:06.359 18:00:04 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:06.359 ************************************ 00:05:06.359 END TEST thread 00:05:06.359 ************************************ 00:05:06.359 00:05:06.359 real 0m2.916s 00:05:06.359 user 0m2.475s 00:05:06.359 sys 0m0.217s 00:05:06.359 18:00:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.359 18:00:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.359 18:00:04 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:06.359 18:00:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.359 18:00:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.359 18:00:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.359 ************************************ 00:05:06.359 START TEST accel 00:05:06.359 ************************************ 00:05:06.359 18:00:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:06.359 * Looking for test storage... 00:05:06.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:06.359 18:00:04 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:06.359 18:00:04 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:06.359 18:00:04 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:06.359 18:00:04 -- accel/accel.sh@59 -- # spdk_tgt_pid=58269 00:05:06.359 18:00:04 -- accel/accel.sh@60 -- # waitforlisten 58269 00:05:06.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.359 18:00:04 -- common/autotest_common.sh@819 -- # '[' -z 58269 ']' 00:05:06.359 18:00:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.359 18:00:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.359 18:00:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.359 18:00:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.359 18:00:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.359 18:00:04 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:06.359 18:00:04 -- accel/accel.sh@58 -- # build_accel_config 00:05:06.359 18:00:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.359 18:00:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.359 18:00:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.359 18:00:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:06.359 18:00:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.360 18:00:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.360 18:00:04 -- accel/accel.sh@42 -- # jq -r . 
00:05:06.360 [2024-04-25 18:00:04.211920] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:06.360 [2024-04-25 18:00:04.212028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58269 ] 00:05:06.616 [2024-04-25 18:00:04.352249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.616 [2024-04-25 18:00:04.494608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.616 [2024-04-25 18:00:04.494814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.550 18:00:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:07.550 18:00:05 -- common/autotest_common.sh@852 -- # return 0 00:05:07.550 18:00:05 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:07.550 18:00:05 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:07.550 18:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.550 18:00:05 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:07.550 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:07.550 18:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 
00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.550 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.550 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.550 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.551 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.551 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.551 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.551 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.551 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.551 18:00:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # IFS== 00:05:07.551 18:00:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:07.551 18:00:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:07.551 18:00:05 -- accel/accel.sh@67 -- # killprocess 58269 00:05:07.551 18:00:05 -- common/autotest_common.sh@926 -- # '[' -z 58269 ']' 00:05:07.551 18:00:05 -- common/autotest_common.sh@930 -- # kill -0 58269 00:05:07.551 18:00:05 -- common/autotest_common.sh@931 -- # uname 00:05:07.551 18:00:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:07.551 18:00:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58269 00:05:07.551 18:00:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:07.551 18:00:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:07.551 18:00:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58269' 00:05:07.551 killing process with pid 58269 00:05:07.551 18:00:05 -- common/autotest_common.sh@945 -- # kill 58269 00:05:07.551 18:00:05 -- common/autotest_common.sh@950 -- # wait 58269 00:05:07.808 18:00:05 -- accel/accel.sh@68 -- # trap - ERR 00:05:07.808 18:00:05 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:07.808 18:00:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:07.808 18:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.808 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:07.808 18:00:05 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:07.808 18:00:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:07.808 18:00:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.808 18:00:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.808 18:00:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.808 18:00:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:07.808 18:00:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.808 18:00:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.808 18:00:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.808 18:00:05 -- accel/accel.sh@42 -- # jq -r . 00:05:08.065 18:00:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.065 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:08.065 18:00:05 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:08.065 18:00:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:08.065 18:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.065 18:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:08.065 ************************************ 00:05:08.065 START TEST accel_missing_filename 00:05:08.065 ************************************ 00:05:08.066 18:00:05 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:08.066 18:00:05 -- common/autotest_common.sh@640 -- # local es=0 00:05:08.066 18:00:05 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:08.066 18:00:05 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:08.066 18:00:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.066 18:00:05 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:08.066 18:00:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.066 18:00:05 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:08.066 18:00:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:08.066 18:00:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.066 18:00:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.066 18:00:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.066 18:00:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.066 18:00:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.066 18:00:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.066 18:00:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.066 18:00:05 -- accel/accel.sh@42 -- # jq -r . 00:05:08.066 [2024-04-25 18:00:05.836511] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:08.066 [2024-04-25 18:00:05.836604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58339 ] 00:05:08.066 [2024-04-25 18:00:05.967004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.323 [2024-04-25 18:00:06.077053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.323 [2024-04-25 18:00:06.131858] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.323 [2024-04-25 18:00:06.208511] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:08.581 A filename is required. 
00:05:08.581 18:00:06 -- common/autotest_common.sh@643 -- # es=234 00:05:08.581 18:00:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:08.581 18:00:06 -- common/autotest_common.sh@652 -- # es=106 00:05:08.581 ************************************ 00:05:08.581 END TEST accel_missing_filename 00:05:08.581 ************************************ 00:05:08.581 18:00:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:08.581 18:00:06 -- common/autotest_common.sh@660 -- # es=1 00:05:08.581 18:00:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:08.581 00:05:08.581 real 0m0.503s 00:05:08.581 user 0m0.339s 00:05:08.581 sys 0m0.104s 00:05:08.581 18:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.581 18:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:08.581 18:00:06 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:08.581 18:00:06 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:08.581 18:00:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.581 18:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:08.581 ************************************ 00:05:08.581 START TEST accel_compress_verify 00:05:08.581 ************************************ 00:05:08.581 18:00:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:08.581 18:00:06 -- common/autotest_common.sh@640 -- # local es=0 00:05:08.581 18:00:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:08.581 18:00:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:08.581 18:00:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.581 18:00:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:08.581 18:00:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.581 18:00:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:08.581 18:00:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:08.581 18:00:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.581 18:00:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.581 18:00:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.581 18:00:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.581 18:00:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.581 18:00:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.581 18:00:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.581 18:00:06 -- accel/accel.sh@42 -- # jq -r . 00:05:08.581 [2024-04-25 18:00:06.388420] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:08.581 [2024-04-25 18:00:06.388528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58363 ] 00:05:08.839 [2024-04-25 18:00:06.518713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.839 [2024-04-25 18:00:06.605790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.839 [2024-04-25 18:00:06.659746] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.839 [2024-04-25 18:00:06.736175] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:09.097 00:05:09.097 Compression does not support the verify option, aborting. 00:05:09.097 18:00:06 -- common/autotest_common.sh@643 -- # es=161 00:05:09.097 18:00:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:09.097 18:00:06 -- common/autotest_common.sh@652 -- # es=33 00:05:09.097 ************************************ 00:05:09.097 END TEST accel_compress_verify 00:05:09.097 ************************************ 00:05:09.097 18:00:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:09.097 18:00:06 -- common/autotest_common.sh@660 -- # es=1 00:05:09.097 18:00:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:09.097 00:05:09.097 real 0m0.478s 00:05:09.097 user 0m0.321s 00:05:09.097 sys 0m0.108s 00:05:09.097 18:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.097 18:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.097 18:00:06 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:09.098 18:00:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:09.098 18:00:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.098 18:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.098 ************************************ 00:05:09.098 START TEST accel_wrong_workload 00:05:09.098 ************************************ 00:05:09.098 18:00:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:09.098 18:00:06 -- common/autotest_common.sh@640 -- # local es=0 00:05:09.098 18:00:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:09.098 18:00:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:09.098 18:00:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.098 18:00:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:09.098 18:00:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.098 18:00:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:09.098 18:00:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:09.098 18:00:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.098 18:00:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:09.098 18:00:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.098 18:00:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.098 18:00:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:09.098 18:00:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:09.098 18:00:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:09.098 18:00:06 -- accel/accel.sh@42 -- # jq -r . 
00:05:09.098 Unsupported workload type: foobar 00:05:09.098 [2024-04-25 18:00:06.912720] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:09.098 accel_perf options: 00:05:09.098 [-h help message] 00:05:09.098 [-q queue depth per core] 00:05:09.098 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:09.098 [-T number of threads per core 00:05:09.098 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:09.098 [-t time in seconds] 00:05:09.098 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:09.098 [ dif_verify, , dif_generate, dif_generate_copy 00:05:09.098 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:09.098 [-l for compress/decompress workloads, name of uncompressed input file 00:05:09.098 [-S for crc32c workload, use this seed value (default 0) 00:05:09.098 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:09.098 [-f for fill workload, use this BYTE value (default 255) 00:05:09.098 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:09.098 [-y verify result if this switch is on] 00:05:09.098 [-a tasks to allocate per core (default: same value as -q)] 00:05:09.098 Can be used to spread operations across a wider range of memory. 00:05:09.098 18:00:06 -- common/autotest_common.sh@643 -- # es=1 00:05:09.098 18:00:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:09.098 18:00:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:09.098 18:00:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:09.098 00:05:09.098 real 0m0.029s 00:05:09.098 user 0m0.014s 00:05:09.098 sys 0m0.013s 00:05:09.098 18:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.098 18:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.098 ************************************ 00:05:09.098 END TEST accel_wrong_workload 00:05:09.098 ************************************ 00:05:09.098 18:00:06 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:09.098 18:00:06 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:09.098 18:00:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.098 18:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.098 ************************************ 00:05:09.098 START TEST accel_negative_buffers 00:05:09.098 ************************************ 00:05:09.098 18:00:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:09.098 18:00:06 -- common/autotest_common.sh@640 -- # local es=0 00:05:09.098 18:00:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:09.098 18:00:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:09.098 18:00:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.098 18:00:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:09.098 18:00:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.098 18:00:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:09.098 18:00:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:09.098 18:00:06 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:09.098 18:00:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:09.098 18:00:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.098 18:00:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.098 18:00:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:09.098 18:00:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:09.098 18:00:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:09.098 18:00:06 -- accel/accel.sh@42 -- # jq -r . 00:05:09.098 -x option must be non-negative. 00:05:09.098 [2024-04-25 18:00:06.984717] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:09.098 accel_perf options: 00:05:09.098 [-h help message] 00:05:09.098 [-q queue depth per core] 00:05:09.098 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:09.098 [-T number of threads per core 00:05:09.098 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:09.098 [-t time in seconds] 00:05:09.098 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:09.098 [ dif_verify, , dif_generate, dif_generate_copy 00:05:09.098 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:09.098 [-l for compress/decompress workloads, name of uncompressed input file 00:05:09.098 [-S for crc32c workload, use this seed value (default 0) 00:05:09.098 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:09.098 [-f for fill workload, use this BYTE value (default 255) 00:05:09.098 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:09.098 [-y verify result if this switch is on] 00:05:09.098 [-a tasks to allocate per core (default: same value as -q)] 00:05:09.098 Can be used to spread operations across a wider range of memory. 
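The usage text above is accel_perf's own option summary, printed when the negative-buffers test deliberately passes an invalid '-x -1'. As an illustrative sketch only (not part of this log), a hand-run invocation built from the flags this test traces would look like the following; the harness additionally feeds a generated accel JSON config through '-c /dev/fd/62', which is omitted here:

  # Sketch only: the xor invocation the negative-buffers test traces above, but
  # with a valid source-buffer count (-x minimum is 2 per the usage text);
  # -t is the run time in seconds and -y asks accel_perf to verify the result.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2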
00:05:09.098 18:00:06 -- common/autotest_common.sh@643 -- # es=1 00:05:09.098 18:00:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:09.098 18:00:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:09.098 18:00:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:09.098 00:05:09.098 real 0m0.031s 00:05:09.098 user 0m0.019s 00:05:09.098 sys 0m0.011s 00:05:09.098 ************************************ 00:05:09.098 END TEST accel_negative_buffers 00:05:09.098 ************************************ 00:05:09.098 18:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.098 18:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.098 18:00:07 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:09.098 18:00:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:09.098 18:00:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.098 18:00:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.356 ************************************ 00:05:09.356 START TEST accel_crc32c 00:05:09.356 ************************************ 00:05:09.356 18:00:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:09.356 18:00:07 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.356 18:00:07 -- accel/accel.sh@17 -- # local accel_module 00:05:09.356 18:00:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:09.356 18:00:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:09.356 18:00:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.356 18:00:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:09.356 18:00:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.356 18:00:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.356 18:00:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:09.356 18:00:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:09.356 18:00:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:09.356 18:00:07 -- accel/accel.sh@42 -- # jq -r . 00:05:09.356 [2024-04-25 18:00:07.060250] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:09.356 [2024-04-25 18:00:07.060400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58422 ] 00:05:09.356 [2024-04-25 18:00:07.199063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.614 [2024-04-25 18:00:07.302955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.014 18:00:08 -- accel/accel.sh@18 -- # out=' 00:05:11.014 SPDK Configuration: 00:05:11.014 Core mask: 0x1 00:05:11.014 00:05:11.014 Accel Perf Configuration: 00:05:11.014 Workload Type: crc32c 00:05:11.014 CRC-32C seed: 32 00:05:11.014 Transfer size: 4096 bytes 00:05:11.014 Vector count 1 00:05:11.014 Module: software 00:05:11.014 Queue depth: 32 00:05:11.014 Allocate depth: 32 00:05:11.014 # threads/core: 1 00:05:11.014 Run time: 1 seconds 00:05:11.014 Verify: Yes 00:05:11.014 00:05:11.014 Running for 1 seconds... 
00:05:11.014 00:05:11.014 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:11.014 ------------------------------------------------------------------------------------ 00:05:11.014 0,0 471936/s 1843 MiB/s 0 0 00:05:11.014 ==================================================================================== 00:05:11.014 Total 471936/s 1843 MiB/s 0 0' 00:05:11.014 18:00:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:11.014 18:00:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.014 18:00:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:11.014 18:00:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.014 18:00:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.014 18:00:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:11.014 18:00:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:11.014 18:00:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:11.014 18:00:08 -- accel/accel.sh@42 -- # jq -r . 00:05:11.014 [2024-04-25 18:00:08.575363] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:11.014 [2024-04-25 18:00:08.575452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58441 ] 00:05:11.014 [2024-04-25 18:00:08.714502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.014 [2024-04-25 18:00:08.794221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val= 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val= 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=0x1 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val= 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val= 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=crc32c 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=32 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val= 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=software 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@23 -- # accel_module=software 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=32 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=32 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=1 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val=Yes 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val= 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:11.014 18:00:08 -- accel/accel.sh@21 -- # val= 00:05:11.014 18:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:11.014 18:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:12.389 18:00:10 -- accel/accel.sh@21 -- # val= 00:05:12.389 18:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.389 18:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:12.389 18:00:10 -- accel/accel.sh@20 -- # read -r var val 00:05:12.389 18:00:10 -- accel/accel.sh@21 -- # val= 00:05:12.389 18:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.389 18:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:12.389 18:00:10 -- accel/accel.sh@20 -- # read -r var val 00:05:12.389 18:00:10 -- accel/accel.sh@21 -- # val= 00:05:12.389 18:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.390 18:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:12.390 18:00:10 -- accel/accel.sh@20 -- # read -r var val 00:05:12.390 18:00:10 -- accel/accel.sh@21 -- # val= 00:05:12.390 18:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.390 18:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:12.390 18:00:10 -- accel/accel.sh@20 -- # read -r var val 00:05:12.390 18:00:10 -- accel/accel.sh@21 -- # val= 00:05:12.390 18:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.390 18:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:12.390 18:00:10 -- 
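As a quick sanity check on the first crc32c summary table above (471936 transfers/s, 1843 MiB/s): the bandwidth column is just transfers per second multiplied by the 4096-byte transfer size reported in the configuration. An illustrative shell check of that relationship, assuming only that arithmetic:

  # 471936 transfers/s * 4096 B per transfer, converted to MiB/s.
  echo $(( 471936 * 4096 / 1024 / 1024 ))   # prints 1843, matching the table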
accel/accel.sh@20 -- # read -r var val 00:05:12.390 18:00:10 -- accel/accel.sh@21 -- # val= 00:05:12.390 18:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.390 18:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:12.390 18:00:10 -- accel/accel.sh@20 -- # read -r var val 00:05:12.390 18:00:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:12.390 18:00:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:12.390 18:00:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.390 00:05:12.390 real 0m3.006s 00:05:12.390 user 0m2.584s 00:05:12.390 sys 0m0.217s 00:05:12.390 18:00:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.390 18:00:10 -- common/autotest_common.sh@10 -- # set +x 00:05:12.390 ************************************ 00:05:12.390 END TEST accel_crc32c 00:05:12.390 ************************************ 00:05:12.390 18:00:10 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:12.390 18:00:10 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:12.390 18:00:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.390 18:00:10 -- common/autotest_common.sh@10 -- # set +x 00:05:12.390 ************************************ 00:05:12.390 START TEST accel_crc32c_C2 00:05:12.390 ************************************ 00:05:12.390 18:00:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:12.390 18:00:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:12.390 18:00:10 -- accel/accel.sh@17 -- # local accel_module 00:05:12.390 18:00:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:12.390 18:00:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:12.390 18:00:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.390 18:00:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:12.390 18:00:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.390 18:00:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.390 18:00:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:12.390 18:00:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:12.390 18:00:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:12.390 18:00:10 -- accel/accel.sh@42 -- # jq -r . 00:05:12.390 [2024-04-25 18:00:10.126049] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:12.390 [2024-04-25 18:00:10.126141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58476 ] 00:05:12.390 [2024-04-25 18:00:10.265717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.647 [2024-04-25 18:00:10.373865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.023 18:00:11 -- accel/accel.sh@18 -- # out=' 00:05:14.023 SPDK Configuration: 00:05:14.023 Core mask: 0x1 00:05:14.023 00:05:14.023 Accel Perf Configuration: 00:05:14.023 Workload Type: crc32c 00:05:14.023 CRC-32C seed: 0 00:05:14.023 Transfer size: 4096 bytes 00:05:14.023 Vector count 2 00:05:14.023 Module: software 00:05:14.023 Queue depth: 32 00:05:14.023 Allocate depth: 32 00:05:14.023 # threads/core: 1 00:05:14.023 Run time: 1 seconds 00:05:14.023 Verify: Yes 00:05:14.023 00:05:14.023 Running for 1 seconds... 
00:05:14.023 00:05:14.023 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:14.023 ------------------------------------------------------------------------------------ 00:05:14.023 0,0 352928/s 2757 MiB/s 0 0 00:05:14.023 ==================================================================================== 00:05:14.023 Total 352928/s 1378 MiB/s 0 0' 00:05:14.023 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.023 18:00:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:14.023 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.023 18:00:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:14.023 18:00:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.023 18:00:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:14.023 18:00:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.023 18:00:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.023 18:00:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:14.023 18:00:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:14.023 18:00:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:14.023 18:00:11 -- accel/accel.sh@42 -- # jq -r . 00:05:14.023 [2024-04-25 18:00:11.666177] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:14.023 [2024-04-25 18:00:11.666296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58495 ] 00:05:14.023 [2024-04-25 18:00:11.802988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.023 [2024-04-25 18:00:11.899960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.281 18:00:11 -- accel/accel.sh@21 -- # val= 00:05:14.281 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.281 18:00:11 -- accel/accel.sh@21 -- # val= 00:05:14.281 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.281 18:00:11 -- accel/accel.sh@21 -- # val=0x1 00:05:14.281 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.281 18:00:11 -- accel/accel.sh@21 -- # val= 00:05:14.281 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.281 18:00:11 -- accel/accel.sh@21 -- # val= 00:05:14.281 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.281 18:00:11 -- accel/accel.sh@21 -- # val=crc32c 00:05:14.281 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.281 18:00:11 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.281 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.281 18:00:11 -- accel/accel.sh@21 -- # val=0 00:05:14.281 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val= 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val=software 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@23 -- # accel_module=software 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val=32 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val=32 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val=1 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val=Yes 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val= 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:14.282 18:00:11 -- accel/accel.sh@21 -- # val= 00:05:14.282 18:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:14.282 18:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:15.656 18:00:13 -- accel/accel.sh@21 -- # val= 00:05:15.656 18:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:15.656 18:00:13 -- accel/accel.sh@21 -- # val= 00:05:15.656 18:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:15.656 18:00:13 -- accel/accel.sh@21 -- # val= 00:05:15.656 18:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:15.656 18:00:13 -- accel/accel.sh@21 -- # val= 00:05:15.656 18:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:15.656 18:00:13 -- accel/accel.sh@21 -- # val= 00:05:15.656 18:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:15.656 18:00:13 -- 
accel/accel.sh@20 -- # read -r var val 00:05:15.656 18:00:13 -- accel/accel.sh@21 -- # val= 00:05:15.656 18:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:15.656 18:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:15.656 18:00:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:15.656 18:00:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:15.656 18:00:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.656 00:05:15.656 real 0m3.068s 00:05:15.656 user 0m2.623s 00:05:15.656 sys 0m0.242s 00:05:15.656 18:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.656 18:00:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.656 ************************************ 00:05:15.656 END TEST accel_crc32c_C2 00:05:15.656 ************************************ 00:05:15.656 18:00:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:15.656 18:00:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:15.656 18:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.656 18:00:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.656 ************************************ 00:05:15.656 START TEST accel_copy 00:05:15.656 ************************************ 00:05:15.656 18:00:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:15.656 18:00:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.656 18:00:13 -- accel/accel.sh@17 -- # local accel_module 00:05:15.656 18:00:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:15.656 18:00:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:15.656 18:00:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.656 18:00:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.656 18:00:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.656 18:00:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.656 18:00:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.656 18:00:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.656 18:00:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.656 18:00:13 -- accel/accel.sh@42 -- # jq -r . 00:05:15.656 [2024-04-25 18:00:13.246949] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:15.656 [2024-04-25 18:00:13.247059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58530 ] 00:05:15.656 [2024-04-25 18:00:13.386700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.656 [2024-04-25 18:00:13.509465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.033 18:00:14 -- accel/accel.sh@18 -- # out=' 00:05:17.033 SPDK Configuration: 00:05:17.033 Core mask: 0x1 00:05:17.033 00:05:17.033 Accel Perf Configuration: 00:05:17.033 Workload Type: copy 00:05:17.033 Transfer size: 4096 bytes 00:05:17.033 Vector count 1 00:05:17.033 Module: software 00:05:17.033 Queue depth: 32 00:05:17.033 Allocate depth: 32 00:05:17.033 # threads/core: 1 00:05:17.033 Run time: 1 seconds 00:05:17.033 Verify: Yes 00:05:17.033 00:05:17.033 Running for 1 seconds... 
00:05:17.033 00:05:17.033 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:17.033 ------------------------------------------------------------------------------------ 00:05:17.033 0,0 342624/s 1338 MiB/s 0 0 00:05:17.033 ==================================================================================== 00:05:17.033 Total 342624/s 1338 MiB/s 0 0' 00:05:17.033 18:00:14 -- accel/accel.sh@20 -- # IFS=: 00:05:17.033 18:00:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:17.033 18:00:14 -- accel/accel.sh@20 -- # read -r var val 00:05:17.033 18:00:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:17.033 18:00:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.033 18:00:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.033 18:00:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.034 18:00:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.034 18:00:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.034 18:00:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.034 18:00:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.034 18:00:14 -- accel/accel.sh@42 -- # jq -r . 00:05:17.034 [2024-04-25 18:00:14.784877] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:17.034 [2024-04-25 18:00:14.785713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58549 ] 00:05:17.034 [2024-04-25 18:00:14.921357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.297 [2024-04-25 18:00:15.011464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val= 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val= 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val=0x1 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val= 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val= 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val=copy 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- 
accel/accel.sh@21 -- # val= 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val=software 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@23 -- # accel_module=software 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val=32 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val=32 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val=1 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val=Yes 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val= 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:17.297 18:00:15 -- accel/accel.sh@21 -- # val= 00:05:17.297 18:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:17.297 18:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:18.678 18:00:16 -- accel/accel.sh@21 -- # val= 00:05:18.678 18:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:18.678 18:00:16 -- accel/accel.sh@21 -- # val= 00:05:18.678 18:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:18.678 18:00:16 -- accel/accel.sh@21 -- # val= 00:05:18.678 18:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:18.678 18:00:16 -- accel/accel.sh@21 -- # val= 00:05:18.678 18:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:18.678 18:00:16 -- accel/accel.sh@21 -- # val= 00:05:18.678 18:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:18.678 18:00:16 -- accel/accel.sh@21 -- # val= 00:05:18.678 18:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.678 18:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:18.678 18:00:16 -- 
accel/accel.sh@20 -- # read -r var val 00:05:18.678 18:00:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:18.678 18:00:16 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:18.678 18:00:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.678 00:05:18.678 real 0m3.042s 00:05:18.678 user 0m2.615s 00:05:18.678 sys 0m0.221s 00:05:18.678 ************************************ 00:05:18.678 END TEST accel_copy 00:05:18.678 ************************************ 00:05:18.678 18:00:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.678 18:00:16 -- common/autotest_common.sh@10 -- # set +x 00:05:18.678 18:00:16 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.678 18:00:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:18.678 18:00:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.678 18:00:16 -- common/autotest_common.sh@10 -- # set +x 00:05:18.678 ************************************ 00:05:18.678 START TEST accel_fill 00:05:18.678 ************************************ 00:05:18.678 18:00:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.678 18:00:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.678 18:00:16 -- accel/accel.sh@17 -- # local accel_module 00:05:18.678 18:00:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.678 18:00:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.678 18:00:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.678 18:00:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.678 18:00:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.678 18:00:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.678 18:00:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.678 18:00:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.678 18:00:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.678 18:00:16 -- accel/accel.sh@42 -- # jq -r . 00:05:18.678 [2024-04-25 18:00:16.340680] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:18.678 [2024-04-25 18:00:16.340804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:05:18.678 [2024-04-25 18:00:16.475868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.678 [2024-04-25 18:00:16.568926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.064 18:00:17 -- accel/accel.sh@18 -- # out=' 00:05:20.064 SPDK Configuration: 00:05:20.064 Core mask: 0x1 00:05:20.064 00:05:20.064 Accel Perf Configuration: 00:05:20.064 Workload Type: fill 00:05:20.064 Fill pattern: 0x80 00:05:20.064 Transfer size: 4096 bytes 00:05:20.064 Vector count 1 00:05:20.064 Module: software 00:05:20.064 Queue depth: 64 00:05:20.064 Allocate depth: 64 00:05:20.064 # threads/core: 1 00:05:20.064 Run time: 1 seconds 00:05:20.064 Verify: Yes 00:05:20.064 00:05:20.064 Running for 1 seconds... 
00:05:20.064 00:05:20.064 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:20.064 ------------------------------------------------------------------------------------ 00:05:20.064 0,0 471424/s 1841 MiB/s 0 0 00:05:20.064 ==================================================================================== 00:05:20.064 Total 471424/s 1841 MiB/s 0 0' 00:05:20.064 18:00:17 -- accel/accel.sh@20 -- # IFS=: 00:05:20.064 18:00:17 -- accel/accel.sh@20 -- # read -r var val 00:05:20.064 18:00:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:20.064 18:00:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:20.064 18:00:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.064 18:00:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.064 18:00:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.064 18:00:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.064 18:00:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.064 18:00:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.064 18:00:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.064 18:00:17 -- accel/accel.sh@42 -- # jq -r . 00:05:20.064 [2024-04-25 18:00:17.842166] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:20.064 [2024-04-25 18:00:17.842261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58603 ] 00:05:20.064 [2024-04-25 18:00:17.976550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.323 [2024-04-25 18:00:18.049254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val= 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val= 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=0x1 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val= 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val= 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=fill 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=0x80 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 
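Unlike the crc32c and copy cases, the fill test above overrides the defaults: the trace shows '-w fill -f 128 -q 64 -a 64 -y', which matches the reported configuration (fill pattern 0x80, queue depth 64, allocate depth 64). A hand-run equivalent, sketched from those traced flags and again omitting the '-c /dev/fd/62' config the harness supplies:

  # Sketch only: the fill workload as traced in this log; -f 128 is the 0x80
  # fill byte, -q 64 the queue depth, -a 64 the tasks allocated per core.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y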
00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val= 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=software 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@23 -- # accel_module=software 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=64 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=64 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=1 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val=Yes 00:05:20.323 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.323 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.323 18:00:18 -- accel/accel.sh@21 -- # val= 00:05:20.324 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.324 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.324 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:20.324 18:00:18 -- accel/accel.sh@21 -- # val= 00:05:20.324 18:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.324 18:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:20.324 18:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:21.711 18:00:19 -- accel/accel.sh@21 -- # val= 00:05:21.711 18:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.711 18:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:21.711 18:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:21.711 18:00:19 -- accel/accel.sh@21 -- # val= 00:05:21.711 18:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.711 18:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:21.711 18:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:21.711 18:00:19 -- accel/accel.sh@21 -- # val= 00:05:21.711 18:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.711 18:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:21.711 18:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:21.711 18:00:19 -- accel/accel.sh@21 -- # val= 00:05:21.711 18:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.712 18:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:21.712 18:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:21.712 18:00:19 -- accel/accel.sh@21 -- # val= 00:05:21.712 18:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.712 18:00:19 -- accel/accel.sh@20 -- # IFS=: 
00:05:21.712 18:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:21.712 18:00:19 -- accel/accel.sh@21 -- # val= 00:05:21.712 18:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.712 18:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:21.712 18:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:21.712 18:00:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:21.712 18:00:19 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:21.712 18:00:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.712 00:05:21.712 real 0m2.972s 00:05:21.712 user 0m2.542s 00:05:21.712 sys 0m0.229s 00:05:21.712 18:00:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.712 18:00:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.712 ************************************ 00:05:21.712 END TEST accel_fill 00:05:21.712 ************************************ 00:05:21.712 18:00:19 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:21.712 18:00:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:21.712 18:00:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.712 18:00:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.712 ************************************ 00:05:21.712 START TEST accel_copy_crc32c 00:05:21.712 ************************************ 00:05:21.712 18:00:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:21.712 18:00:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.712 18:00:19 -- accel/accel.sh@17 -- # local accel_module 00:05:21.712 18:00:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:21.712 18:00:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:21.712 18:00:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.712 18:00:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.712 18:00:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.712 18:00:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.712 18:00:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.712 18:00:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.712 18:00:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.712 18:00:19 -- accel/accel.sh@42 -- # jq -r . 00:05:21.712 [2024-04-25 18:00:19.359772] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:21.712 [2024-04-25 18:00:19.359872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58638 ] 00:05:21.712 [2024-04-25 18:00:19.496442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.712 [2024-04-25 18:00:19.600847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.094 18:00:20 -- accel/accel.sh@18 -- # out=' 00:05:23.094 SPDK Configuration: 00:05:23.094 Core mask: 0x1 00:05:23.094 00:05:23.094 Accel Perf Configuration: 00:05:23.094 Workload Type: copy_crc32c 00:05:23.094 CRC-32C seed: 0 00:05:23.094 Vector size: 4096 bytes 00:05:23.094 Transfer size: 4096 bytes 00:05:23.094 Vector count 1 00:05:23.094 Module: software 00:05:23.094 Queue depth: 32 00:05:23.094 Allocate depth: 32 00:05:23.094 # threads/core: 1 00:05:23.094 Run time: 1 seconds 00:05:23.094 Verify: Yes 00:05:23.094 00:05:23.094 Running for 1 seconds... 
00:05:23.094 00:05:23.094 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:23.094 ------------------------------------------------------------------------------------ 00:05:23.094 0,0 271808/s 1061 MiB/s 0 0 00:05:23.094 ==================================================================================== 00:05:23.094 Total 271808/s 1061 MiB/s 0 0' 00:05:23.094 18:00:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:23.094 18:00:20 -- accel/accel.sh@20 -- # IFS=: 00:05:23.094 18:00:20 -- accel/accel.sh@20 -- # read -r var val 00:05:23.094 18:00:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:23.094 18:00:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.094 18:00:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.094 18:00:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.094 18:00:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.094 18:00:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.094 18:00:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.094 18:00:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.094 18:00:20 -- accel/accel.sh@42 -- # jq -r . 00:05:23.094 [2024-04-25 18:00:20.872624] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:23.094 [2024-04-25 18:00:20.872742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58657 ] 00:05:23.094 [2024-04-25 18:00:21.004953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.353 [2024-04-25 18:00:21.120101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val= 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val= 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val=0x1 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val= 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val= 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val=0 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 
18:00:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val= 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val=software 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@23 -- # accel_module=software 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val=32 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val=32 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.353 18:00:21 -- accel/accel.sh@21 -- # val=1 00:05:23.353 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.353 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.354 18:00:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:23.354 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.354 18:00:21 -- accel/accel.sh@21 -- # val=Yes 00:05:23.354 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.354 18:00:21 -- accel/accel.sh@21 -- # val= 00:05:23.354 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:23.354 18:00:21 -- accel/accel.sh@21 -- # val= 00:05:23.354 18:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:23.354 18:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:24.728 18:00:22 -- accel/accel.sh@21 -- # val= 00:05:24.728 18:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.728 18:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:24.728 18:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:24.728 18:00:22 -- accel/accel.sh@21 -- # val= 00:05:24.728 18:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.728 18:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:24.728 18:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:24.728 18:00:22 -- accel/accel.sh@21 -- # val= 00:05:24.728 18:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.728 18:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:24.729 18:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:24.729 18:00:22 -- accel/accel.sh@21 -- # val= 00:05:24.729 18:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.729 18:00:22 -- accel/accel.sh@20 -- # IFS=: 
00:05:24.729 18:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:24.729 18:00:22 -- accel/accel.sh@21 -- # val= 00:05:24.729 18:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.729 18:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:24.729 18:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:24.729 18:00:22 -- accel/accel.sh@21 -- # val= 00:05:24.729 18:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.729 18:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:24.729 18:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:24.729 18:00:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:24.729 18:00:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:24.729 18:00:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.729 00:05:24.729 real 0m3.024s 00:05:24.729 user 0m2.600s 00:05:24.729 sys 0m0.224s 00:05:24.729 18:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.729 18:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.729 ************************************ 00:05:24.729 END TEST accel_copy_crc32c 00:05:24.729 ************************************ 00:05:24.729 18:00:22 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:24.729 18:00:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:24.729 18:00:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.729 18:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.729 ************************************ 00:05:24.729 START TEST accel_copy_crc32c_C2 00:05:24.729 ************************************ 00:05:24.729 18:00:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:24.729 18:00:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:24.729 18:00:22 -- accel/accel.sh@17 -- # local accel_module 00:05:24.729 18:00:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:24.729 18:00:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:24.729 18:00:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.729 18:00:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.729 18:00:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.729 18:00:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.729 18:00:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.729 18:00:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.729 18:00:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.729 18:00:22 -- accel/accel.sh@42 -- # jq -r . 00:05:24.729 [2024-04-25 18:00:22.443924] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:24.729 [2024-04-25 18:00:22.444022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58692 ] 00:05:24.729 [2024-04-25 18:00:22.584510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.987 [2024-04-25 18:00:22.748908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.364 18:00:23 -- accel/accel.sh@18 -- # out=' 00:05:26.364 SPDK Configuration: 00:05:26.364 Core mask: 0x1 00:05:26.364 00:05:26.364 Accel Perf Configuration: 00:05:26.364 Workload Type: copy_crc32c 00:05:26.364 CRC-32C seed: 0 00:05:26.364 Vector size: 4096 bytes 00:05:26.364 Transfer size: 8192 bytes 00:05:26.364 Vector count 2 00:05:26.364 Module: software 00:05:26.364 Queue depth: 32 00:05:26.364 Allocate depth: 32 00:05:26.364 # threads/core: 1 00:05:26.364 Run time: 1 seconds 00:05:26.364 Verify: Yes 00:05:26.364 00:05:26.364 Running for 1 seconds... 00:05:26.364 00:05:26.364 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:26.364 ------------------------------------------------------------------------------------ 00:05:26.364 0,0 193248/s 1509 MiB/s 0 0 00:05:26.364 ==================================================================================== 00:05:26.364 Total 193248/s 754 MiB/s 0 0' 00:05:26.364 18:00:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:26.364 18:00:23 -- accel/accel.sh@20 -- # IFS=: 00:05:26.364 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.364 18:00:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:26.364 18:00:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.364 18:00:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.364 18:00:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.364 18:00:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.364 18:00:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.364 18:00:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.364 18:00:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.364 18:00:24 -- accel/accel.sh@42 -- # jq -r . 00:05:26.364 [2024-04-25 18:00:24.022295] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:26.364 [2024-04-25 18:00:24.022383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58717 ] 00:05:26.364 [2024-04-25 18:00:24.156872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.364 [2024-04-25 18:00:24.275225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val= 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val= 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=0x1 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val= 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val= 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=0 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val= 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=software 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@23 -- # accel_module=software 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=32 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=32 
00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=1 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val=Yes 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val= 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:26.636 18:00:24 -- accel/accel.sh@21 -- # val= 00:05:26.636 18:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:26.636 18:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:27.587 18:00:25 -- accel/accel.sh@21 -- # val= 00:05:27.587 18:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.587 18:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:27.587 18:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:27.587 18:00:25 -- accel/accel.sh@21 -- # val= 00:05:27.587 18:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.587 18:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:27.587 18:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:27.587 18:00:25 -- accel/accel.sh@21 -- # val= 00:05:27.587 18:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.587 18:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:27.587 18:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:27.587 18:00:25 -- accel/accel.sh@21 -- # val= 00:05:27.587 18:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.587 18:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:27.845 18:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:27.845 18:00:25 -- accel/accel.sh@21 -- # val= 00:05:27.845 18:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.845 18:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:27.845 18:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:27.845 18:00:25 -- accel/accel.sh@21 -- # val= 00:05:27.845 18:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.845 18:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:27.845 18:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:27.845 18:00:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:27.845 18:00:25 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:27.845 18:00:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.846 00:05:27.846 real 0m3.105s 00:05:27.846 user 0m2.664s 00:05:27.846 sys 0m0.238s 00:05:27.846 18:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.846 18:00:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.846 ************************************ 00:05:27.846 END TEST accel_copy_crc32c_C2 00:05:27.846 ************************************ 00:05:27.846 18:00:25 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:27.846 18:00:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:05:27.846 18:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.846 18:00:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.846 ************************************ 00:05:27.846 START TEST accel_dualcast 00:05:27.846 ************************************ 00:05:27.846 18:00:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:27.846 18:00:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:27.846 18:00:25 -- accel/accel.sh@17 -- # local accel_module 00:05:27.846 18:00:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:27.846 18:00:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:27.846 18:00:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.846 18:00:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.846 18:00:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.846 18:00:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.846 18:00:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.846 18:00:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.846 18:00:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.846 18:00:25 -- accel/accel.sh@42 -- # jq -r . 00:05:27.846 [2024-04-25 18:00:25.596634] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:27.846 [2024-04-25 18:00:25.596757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58746 ] 00:05:27.846 [2024-04-25 18:00:25.741759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.104 [2024-04-25 18:00:25.819652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.491 18:00:27 -- accel/accel.sh@18 -- # out=' 00:05:29.491 SPDK Configuration: 00:05:29.491 Core mask: 0x1 00:05:29.491 00:05:29.491 Accel Perf Configuration: 00:05:29.491 Workload Type: dualcast 00:05:29.491 Transfer size: 4096 bytes 00:05:29.491 Vector count 1 00:05:29.491 Module: software 00:05:29.491 Queue depth: 32 00:05:29.491 Allocate depth: 32 00:05:29.491 # threads/core: 1 00:05:29.491 Run time: 1 seconds 00:05:29.491 Verify: Yes 00:05:29.491 00:05:29.491 Running for 1 seconds... 00:05:29.491 00:05:29.491 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.491 ------------------------------------------------------------------------------------ 00:05:29.491 0,0 371552/s 1451 MiB/s 0 0 00:05:29.491 ==================================================================================== 00:05:29.491 Total 371552/s 1451 MiB/s 0 0' 00:05:29.491 18:00:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:29.491 18:00:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.491 18:00:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.491 18:00:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.491 18:00:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.491 18:00:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.491 18:00:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.491 18:00:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.491 18:00:27 -- accel/accel.sh@42 -- # jq -r . 
00:05:29.491 [2024-04-25 18:00:27.089870] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:29.491 [2024-04-25 18:00:27.089953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58766 ] 00:05:29.491 [2024-04-25 18:00:27.226448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.491 [2024-04-25 18:00:27.339332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val= 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val= 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val=0x1 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val= 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val= 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val=dualcast 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val= 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val=software 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val=32 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val=32 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val=1 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 
18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val=Yes 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val= 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:29.491 18:00:27 -- accel/accel.sh@21 -- # val= 00:05:29.491 18:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:29.491 18:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:30.873 18:00:28 -- accel/accel.sh@21 -- # val= 00:05:30.873 18:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:30.873 18:00:28 -- accel/accel.sh@21 -- # val= 00:05:30.873 18:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:30.873 18:00:28 -- accel/accel.sh@21 -- # val= 00:05:30.873 18:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:30.873 18:00:28 -- accel/accel.sh@21 -- # val= 00:05:30.873 18:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:30.873 18:00:28 -- accel/accel.sh@21 -- # val= 00:05:30.873 18:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:30.873 18:00:28 -- accel/accel.sh@21 -- # val= 00:05:30.873 ************************************ 00:05:30.873 END TEST accel_dualcast 00:05:30.873 ************************************ 00:05:30.873 18:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:30.873 18:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:30.873 18:00:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:30.873 18:00:28 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:30.873 18:00:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.873 00:05:30.873 real 0m3.020s 00:05:30.873 user 0m2.594s 00:05:30.873 sys 0m0.223s 00:05:30.873 18:00:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.873 18:00:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.873 18:00:28 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:30.873 18:00:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:30.873 18:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.873 18:00:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.873 ************************************ 00:05:30.873 START TEST accel_compare 00:05:30.873 ************************************ 00:05:30.873 18:00:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:30.873 
18:00:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:30.873 18:00:28 -- accel/accel.sh@17 -- # local accel_module 00:05:30.873 18:00:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:30.873 18:00:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.873 18:00:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:30.873 18:00:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.873 18:00:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.873 18:00:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.873 18:00:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.873 18:00:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.873 18:00:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.873 18:00:28 -- accel/accel.sh@42 -- # jq -r . 00:05:30.873 [2024-04-25 18:00:28.671801] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:30.873 [2024-04-25 18:00:28.671910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58800 ] 00:05:31.132 [2024-04-25 18:00:28.811410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.132 [2024-04-25 18:00:28.909931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.507 18:00:30 -- accel/accel.sh@18 -- # out=' 00:05:32.507 SPDK Configuration: 00:05:32.507 Core mask: 0x1 00:05:32.507 00:05:32.507 Accel Perf Configuration: 00:05:32.507 Workload Type: compare 00:05:32.507 Transfer size: 4096 bytes 00:05:32.507 Vector count 1 00:05:32.507 Module: software 00:05:32.507 Queue depth: 32 00:05:32.507 Allocate depth: 32 00:05:32.507 # threads/core: 1 00:05:32.507 Run time: 1 seconds 00:05:32.507 Verify: Yes 00:05:32.507 00:05:32.507 Running for 1 seconds... 00:05:32.507 00:05:32.507 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:32.507 ------------------------------------------------------------------------------------ 00:05:32.507 0,0 484064/s 1890 MiB/s 0 0 00:05:32.507 ==================================================================================== 00:05:32.507 Total 484064/s 1890 MiB/s 0 0' 00:05:32.507 18:00:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:32.507 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.507 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.507 18:00:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:32.507 18:00:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.507 18:00:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.507 18:00:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.507 18:00:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.507 18:00:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.507 18:00:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.507 18:00:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.507 18:00:30 -- accel/accel.sh@42 -- # jq -r . 00:05:32.507 [2024-04-25 18:00:30.180917] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:32.507 [2024-04-25 18:00:30.181022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58820 ] 00:05:32.507 [2024-04-25 18:00:30.310516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.507 [2024-04-25 18:00:30.408980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val= 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val= 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val=0x1 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val= 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val= 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val=compare 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val= 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.776 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.776 18:00:30 -- accel/accel.sh@21 -- # val=software 00:05:32.776 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@23 -- # accel_module=software 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.777 18:00:30 -- accel/accel.sh@21 -- # val=32 00:05:32.777 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.777 18:00:30 -- accel/accel.sh@21 -- # val=32 00:05:32.777 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.777 18:00:30 -- accel/accel.sh@21 -- # val=1 00:05:32.777 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.777 18:00:30 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:32.777 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.777 18:00:30 -- accel/accel.sh@21 -- # val=Yes 00:05:32.777 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.777 18:00:30 -- accel/accel.sh@21 -- # val= 00:05:32.777 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:32.777 18:00:30 -- accel/accel.sh@21 -- # val= 00:05:32.777 18:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:32.777 18:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:33.725 18:00:31 -- accel/accel.sh@21 -- # val= 00:05:33.725 18:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:33.725 18:00:31 -- accel/accel.sh@21 -- # val= 00:05:33.725 18:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:33.725 18:00:31 -- accel/accel.sh@21 -- # val= 00:05:33.725 18:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:33.725 18:00:31 -- accel/accel.sh@21 -- # val= 00:05:33.725 18:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:33.725 ************************************ 00:05:33.725 END TEST accel_compare 00:05:33.725 ************************************ 00:05:33.725 18:00:31 -- accel/accel.sh@21 -- # val= 00:05:33.725 18:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:33.725 18:00:31 -- accel/accel.sh@21 -- # val= 00:05:33.725 18:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:33.725 18:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:33.725 18:00:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:33.725 18:00:31 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:33.725 18:00:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.725 00:05:33.725 real 0m3.005s 00:05:33.725 user 0m2.581s 00:05:33.725 sys 0m0.221s 00:05:33.725 18:00:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.725 18:00:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.983 18:00:31 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:33.983 18:00:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:33.983 18:00:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.983 18:00:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.983 ************************************ 00:05:33.983 START TEST accel_xor 00:05:33.983 ************************************ 00:05:33.983 18:00:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:33.983 18:00:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.983 18:00:31 -- accel/accel.sh@17 -- # local accel_module 00:05:33.983 
18:00:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:33.983 18:00:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:33.983 18:00:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.983 18:00:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.983 18:00:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.983 18:00:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.983 18:00:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.983 18:00:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.983 18:00:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.983 18:00:31 -- accel/accel.sh@42 -- # jq -r . 00:05:33.983 [2024-04-25 18:00:31.724098] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:33.983 [2024-04-25 18:00:31.724186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58854 ] 00:05:33.983 [2024-04-25 18:00:31.855196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.242 [2024-04-25 18:00:31.973237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.619 18:00:33 -- accel/accel.sh@18 -- # out=' 00:05:35.619 SPDK Configuration: 00:05:35.619 Core mask: 0x1 00:05:35.619 00:05:35.619 Accel Perf Configuration: 00:05:35.619 Workload Type: xor 00:05:35.619 Source buffers: 2 00:05:35.619 Transfer size: 4096 bytes 00:05:35.619 Vector count 1 00:05:35.619 Module: software 00:05:35.619 Queue depth: 32 00:05:35.619 Allocate depth: 32 00:05:35.619 # threads/core: 1 00:05:35.619 Run time: 1 seconds 00:05:35.619 Verify: Yes 00:05:35.619 00:05:35.619 Running for 1 seconds... 00:05:35.619 00:05:35.619 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:35.619 ------------------------------------------------------------------------------------ 00:05:35.619 0,0 249984/s 976 MiB/s 0 0 00:05:35.619 ==================================================================================== 00:05:35.619 Total 249984/s 976 MiB/s 0 0' 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:35.619 18:00:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.619 18:00:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.619 18:00:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.619 18:00:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.619 18:00:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.619 18:00:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.619 18:00:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.619 18:00:33 -- accel/accel.sh@42 -- # jq -r . 00:05:35.619 [2024-04-25 18:00:33.243157] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:35.619 [2024-04-25 18:00:33.243251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58874 ] 00:05:35.619 [2024-04-25 18:00:33.373759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.619 [2024-04-25 18:00:33.473970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val= 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val= 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val=0x1 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val= 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val= 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val=xor 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val=2 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val= 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val=software 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@23 -- # accel_module=software 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val=32 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.619 18:00:33 -- accel/accel.sh@21 -- # val=32 00:05:35.619 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.619 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.620 18:00:33 -- accel/accel.sh@21 -- # val=1 00:05:35.620 18:00:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.620 18:00:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:35.620 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.620 18:00:33 -- accel/accel.sh@21 -- # val=Yes 00:05:35.620 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.620 18:00:33 -- accel/accel.sh@21 -- # val= 00:05:35.620 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:35.620 18:00:33 -- accel/accel.sh@21 -- # val= 00:05:35.620 18:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:35.620 18:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:36.995 18:00:34 -- accel/accel.sh@21 -- # val= 00:05:36.995 18:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:36.995 18:00:34 -- accel/accel.sh@21 -- # val= 00:05:36.995 18:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:36.995 18:00:34 -- accel/accel.sh@21 -- # val= 00:05:36.995 18:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:36.995 18:00:34 -- accel/accel.sh@21 -- # val= 00:05:36.995 18:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:36.995 18:00:34 -- accel/accel.sh@21 -- # val= 00:05:36.995 18:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:36.995 18:00:34 -- accel/accel.sh@21 -- # val= 00:05:36.995 18:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:36.995 18:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:36.995 18:00:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:36.995 18:00:34 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:36.995 18:00:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.995 00:05:36.995 real 0m3.031s 00:05:36.995 user 0m2.600s 00:05:36.995 sys 0m0.224s 00:05:36.995 18:00:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.995 18:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.995 ************************************ 00:05:36.995 END TEST accel_xor 00:05:36.995 ************************************ 00:05:36.995 18:00:34 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:36.995 18:00:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:36.995 18:00:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.995 18:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.995 ************************************ 00:05:36.995 START TEST accel_xor 00:05:36.995 ************************************ 00:05:36.995 
18:00:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:36.995 18:00:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.995 18:00:34 -- accel/accel.sh@17 -- # local accel_module 00:05:36.995 18:00:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:36.995 18:00:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:36.995 18:00:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.995 18:00:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.995 18:00:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.995 18:00:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.995 18:00:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.995 18:00:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.995 18:00:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.995 18:00:34 -- accel/accel.sh@42 -- # jq -r . 00:05:36.995 [2024-04-25 18:00:34.807829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:36.995 [2024-04-25 18:00:34.807934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58908 ] 00:05:37.255 [2024-04-25 18:00:34.944411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.255 [2024-04-25 18:00:35.060071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.640 18:00:36 -- accel/accel.sh@18 -- # out=' 00:05:38.640 SPDK Configuration: 00:05:38.640 Core mask: 0x1 00:05:38.640 00:05:38.640 Accel Perf Configuration: 00:05:38.640 Workload Type: xor 00:05:38.640 Source buffers: 3 00:05:38.640 Transfer size: 4096 bytes 00:05:38.640 Vector count 1 00:05:38.640 Module: software 00:05:38.640 Queue depth: 32 00:05:38.640 Allocate depth: 32 00:05:38.640 # threads/core: 1 00:05:38.640 Run time: 1 seconds 00:05:38.640 Verify: Yes 00:05:38.640 00:05:38.640 Running for 1 seconds... 00:05:38.640 00:05:38.640 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:38.640 ------------------------------------------------------------------------------------ 00:05:38.640 0,0 229056/s 894 MiB/s 0 0 00:05:38.640 ==================================================================================== 00:05:38.640 Total 229056/s 894 MiB/s 0 0' 00:05:38.640 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.640 18:00:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:38.640 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.640 18:00:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:38.640 18:00:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.640 18:00:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.640 18:00:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.640 18:00:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.640 18:00:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.640 18:00:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.640 18:00:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.640 18:00:36 -- accel/accel.sh@42 -- # jq -r . 00:05:38.640 [2024-04-25 18:00:36.345579] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:38.640 [2024-04-25 18:00:36.345679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:05:38.640 [2024-04-25 18:00:36.483003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.898 [2024-04-25 18:00:36.597101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val= 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val= 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val=0x1 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val= 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val= 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val=xor 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val=3 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.898 18:00:36 -- accel/accel.sh@21 -- # val= 00:05:38.898 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.898 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val=software 00:05:38.899 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@23 -- # accel_module=software 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val=32 00:05:38.899 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val=32 00:05:38.899 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val=1 00:05:38.899 18:00:36 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:38.899 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val=Yes 00:05:38.899 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val= 00:05:38.899 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:38.899 18:00:36 -- accel/accel.sh@21 -- # val= 00:05:38.899 18:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:38.899 18:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:40.291 18:00:37 -- accel/accel.sh@21 -- # val= 00:05:40.291 18:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:40.291 18:00:37 -- accel/accel.sh@21 -- # val= 00:05:40.291 18:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:40.291 18:00:37 -- accel/accel.sh@21 -- # val= 00:05:40.291 18:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:40.291 18:00:37 -- accel/accel.sh@21 -- # val= 00:05:40.291 18:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:40.291 18:00:37 -- accel/accel.sh@21 -- # val= 00:05:40.291 18:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:40.291 18:00:37 -- accel/accel.sh@21 -- # val= 00:05:40.291 18:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:40.291 18:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:40.291 18:00:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:40.291 ************************************ 00:05:40.291 END TEST accel_xor 00:05:40.291 ************************************ 00:05:40.291 18:00:37 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:40.291 18:00:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.291 00:05:40.291 real 0m3.072s 00:05:40.291 user 0m2.633s 00:05:40.291 sys 0m0.231s 00:05:40.291 18:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.291 18:00:37 -- common/autotest_common.sh@10 -- # set +x 00:05:40.291 18:00:37 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:40.291 18:00:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:40.291 18:00:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.291 18:00:37 -- common/autotest_common.sh@10 -- # set +x 00:05:40.291 ************************************ 00:05:40.291 START TEST accel_dif_verify 00:05:40.291 ************************************ 
00:05:40.291 18:00:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:40.291 18:00:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.291 18:00:37 -- accel/accel.sh@17 -- # local accel_module 00:05:40.291 18:00:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:40.291 18:00:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:40.291 18:00:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.291 18:00:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.291 18:00:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.291 18:00:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.291 18:00:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.291 18:00:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.291 18:00:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.291 18:00:37 -- accel/accel.sh@42 -- # jq -r . 00:05:40.291 [2024-04-25 18:00:37.928904] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:40.291 [2024-04-25 18:00:37.929041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58962 ] 00:05:40.291 [2024-04-25 18:00:38.067843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.291 [2024-04-25 18:00:38.188379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.669 18:00:39 -- accel/accel.sh@18 -- # out=' 00:05:41.669 SPDK Configuration: 00:05:41.669 Core mask: 0x1 00:05:41.669 00:05:41.669 Accel Perf Configuration: 00:05:41.669 Workload Type: dif_verify 00:05:41.669 Vector size: 4096 bytes 00:05:41.669 Transfer size: 4096 bytes 00:05:41.669 Block size: 512 bytes 00:05:41.669 Metadata size: 8 bytes 00:05:41.669 Vector count 1 00:05:41.669 Module: software 00:05:41.669 Queue depth: 32 00:05:41.669 Allocate depth: 32 00:05:41.669 # threads/core: 1 00:05:41.669 Run time: 1 seconds 00:05:41.669 Verify: No 00:05:41.669 00:05:41.669 Running for 1 seconds... 00:05:41.669 00:05:41.669 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:41.669 ------------------------------------------------------------------------------------ 00:05:41.669 0,0 99296/s 393 MiB/s 0 0 00:05:41.669 ==================================================================================== 00:05:41.669 Total 99296/s 387 MiB/s 0 0' 00:05:41.669 18:00:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:41.669 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.669 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.669 18:00:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:41.669 18:00:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.669 18:00:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.669 18:00:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.669 18:00:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.669 18:00:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.669 18:00:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.669 18:00:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.669 18:00:39 -- accel/accel.sh@42 -- # jq -r . 00:05:41.669 [2024-04-25 18:00:39.448231] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:41.669 [2024-04-25 18:00:39.448336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:05:41.669 [2024-04-25 18:00:39.580962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.928 [2024-04-25 18:00:39.713173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val= 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val= 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val=0x1 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val= 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val= 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val=dif_verify 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val= 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val=software 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@23 -- # accel_module=software 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 
-- # val=32 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val=32 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val=1 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val=No 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val= 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:41.928 18:00:39 -- accel/accel.sh@21 -- # val= 00:05:41.928 18:00:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # IFS=: 00:05:41.928 18:00:39 -- accel/accel.sh@20 -- # read -r var val 00:05:43.356 18:00:40 -- accel/accel.sh@21 -- # val= 00:05:43.356 18:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:43.356 18:00:40 -- accel/accel.sh@21 -- # val= 00:05:43.356 18:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:43.356 18:00:40 -- accel/accel.sh@21 -- # val= 00:05:43.356 18:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:43.356 18:00:40 -- accel/accel.sh@21 -- # val= 00:05:43.356 18:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:43.356 18:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:43.357 18:00:40 -- accel/accel.sh@21 -- # val= 00:05:43.357 18:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.357 18:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:43.357 18:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:43.357 18:00:40 -- accel/accel.sh@21 -- # val= 00:05:43.357 18:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.357 18:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:43.357 18:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:43.357 18:00:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:43.357 18:00:40 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:43.357 18:00:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.357 00:05:43.357 real 0m3.063s 00:05:43.357 user 0m2.619s 00:05:43.357 sys 0m0.239s 00:05:43.357 18:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.357 18:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:43.357 ************************************ 00:05:43.357 END TEST 
accel_dif_verify 00:05:43.357 ************************************ 00:05:43.357 18:00:41 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:43.357 18:00:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:43.357 18:00:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.357 18:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.357 ************************************ 00:05:43.357 START TEST accel_dif_generate 00:05:43.357 ************************************ 00:05:43.357 18:00:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:43.357 18:00:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.357 18:00:41 -- accel/accel.sh@17 -- # local accel_module 00:05:43.357 18:00:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:43.357 18:00:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:43.357 18:00:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.357 18:00:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.357 18:00:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.357 18:00:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.357 18:00:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.357 18:00:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.357 18:00:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.357 18:00:41 -- accel/accel.sh@42 -- # jq -r . 00:05:43.357 [2024-04-25 18:00:41.038342] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:43.357 [2024-04-25 18:00:41.038428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59016 ] 00:05:43.357 [2024-04-25 18:00:41.177114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.357 [2024-04-25 18:00:41.288360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.734 18:00:42 -- accel/accel.sh@18 -- # out=' 00:05:44.734 SPDK Configuration: 00:05:44.734 Core mask: 0x1 00:05:44.734 00:05:44.734 Accel Perf Configuration: 00:05:44.734 Workload Type: dif_generate 00:05:44.734 Vector size: 4096 bytes 00:05:44.734 Transfer size: 4096 bytes 00:05:44.734 Block size: 512 bytes 00:05:44.734 Metadata size: 8 bytes 00:05:44.734 Vector count 1 00:05:44.734 Module: software 00:05:44.734 Queue depth: 32 00:05:44.734 Allocate depth: 32 00:05:44.734 # threads/core: 1 00:05:44.734 Run time: 1 seconds 00:05:44.734 Verify: No 00:05:44.734 00:05:44.734 Running for 1 seconds... 
00:05:44.734 00:05:44.734 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:44.734 ------------------------------------------------------------------------------------ 00:05:44.734 0,0 131296/s 520 MiB/s 0 0 00:05:44.734 ==================================================================================== 00:05:44.734 Total 131296/s 512 MiB/s 0 0' 00:05:44.734 18:00:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:44.734 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.734 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.734 18:00:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:44.734 18:00:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.734 18:00:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.734 18:00:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.734 18:00:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.734 18:00:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.734 18:00:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.734 18:00:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.734 18:00:42 -- accel/accel.sh@42 -- # jq -r . 00:05:44.734 [2024-04-25 18:00:42.551784] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:44.734 [2024-04-25 18:00:42.552413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59036 ] 00:05:44.993 [2024-04-25 18:00:42.688683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.993 [2024-04-25 18:00:42.794055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val= 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val= 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val=0x1 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val= 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val= 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val=dif_generate 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 
00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val= 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val=software 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val=32 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val=32 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val=1 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val=No 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val= 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.993 18:00:42 -- accel/accel.sh@21 -- # val= 00:05:44.993 18:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.993 18:00:42 -- accel/accel.sh@20 -- # read -r var val 00:05:46.371 18:00:44 -- accel/accel.sh@21 -- # val= 00:05:46.371 18:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:46.371 18:00:44 -- accel/accel.sh@21 -- # val= 00:05:46.371 18:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:46.371 18:00:44 -- accel/accel.sh@21 -- # val= 00:05:46.371 18:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.371 18:00:44 -- 
accel/accel.sh@20 -- # IFS=: 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:46.371 18:00:44 -- accel/accel.sh@21 -- # val= 00:05:46.371 18:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:46.371 18:00:44 -- accel/accel.sh@21 -- # val= 00:05:46.371 18:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:46.371 18:00:44 -- accel/accel.sh@21 -- # val= 00:05:46.371 18:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:46.371 18:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:46.371 18:00:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.371 18:00:44 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:46.371 18:00:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.371 00:05:46.371 real 0m3.041s 00:05:46.371 user 0m2.610s 00:05:46.371 sys 0m0.232s 00:05:46.371 18:00:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.371 18:00:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.371 ************************************ 00:05:46.371 END TEST accel_dif_generate 00:05:46.371 ************************************ 00:05:46.371 18:00:44 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:46.371 18:00:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:46.371 18:00:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.371 18:00:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.371 ************************************ 00:05:46.371 START TEST accel_dif_generate_copy 00:05:46.371 ************************************ 00:05:46.371 18:00:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:05:46.371 18:00:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.371 18:00:44 -- accel/accel.sh@17 -- # local accel_module 00:05:46.371 18:00:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:46.371 18:00:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:46.371 18:00:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.371 18:00:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.371 18:00:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.371 18:00:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.371 18:00:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.371 18:00:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.371 18:00:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.371 18:00:44 -- accel/accel.sh@42 -- # jq -r . 00:05:46.371 [2024-04-25 18:00:44.136403] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
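For reference, each of these accel sub-tests is a one-second accel_perf run launched by accel.sh. A rough stand-alone equivalent of the dif_generate_copy invocation recorded above, using only flags that appear in the trace and leaving out the JSON accel configuration the harness pipes in over /dev/fd/62 (so only the software module is exercised, as the "Module: software" lines confirm), would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

The 4096-byte transfer size and queue depth of 32 shown in the surrounding configuration summaries are the values accel_perf reports for this run rather than anything passed explicitly on that command line.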
00:05:46.371 [2024-04-25 18:00:44.136512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:05:46.371 [2024-04-25 18:00:44.269983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.630 [2024-04-25 18:00:44.392842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.008 18:00:45 -- accel/accel.sh@18 -- # out=' 00:05:48.008 SPDK Configuration: 00:05:48.008 Core mask: 0x1 00:05:48.008 00:05:48.008 Accel Perf Configuration: 00:05:48.008 Workload Type: dif_generate_copy 00:05:48.008 Vector size: 4096 bytes 00:05:48.008 Transfer size: 4096 bytes 00:05:48.008 Vector count 1 00:05:48.008 Module: software 00:05:48.008 Queue depth: 32 00:05:48.008 Allocate depth: 32 00:05:48.008 # threads/core: 1 00:05:48.009 Run time: 1 seconds 00:05:48.009 Verify: No 00:05:48.009 00:05:48.009 Running for 1 seconds... 00:05:48.009 00:05:48.009 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.009 ------------------------------------------------------------------------------------ 00:05:48.009 0,0 99136/s 393 MiB/s 0 0 00:05:48.009 ==================================================================================== 00:05:48.009 Total 99136/s 387 MiB/s 0 0' 00:05:48.009 18:00:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:48.009 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.009 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.009 18:00:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:48.009 18:00:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.009 18:00:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.009 18:00:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.009 18:00:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.009 18:00:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.009 18:00:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.009 18:00:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.009 18:00:45 -- accel/accel.sh@42 -- # jq -r . 00:05:48.009 [2024-04-25 18:00:45.678937] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
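As a quick cross-check of the result tables above: 99136 transfers/s at 4096 bytes each works out to 99136 * 4096 / 2^20 ≈ 387 MiB/s, which matches the Total line of the first dif_generate_copy pass, and the same arithmetic reproduces the 512 MiB/s total reported for the dif_generate run further back.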
00:05:48.009 [2024-04-25 18:00:45.679670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:05:48.009 [2024-04-25 18:00:45.809801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.009 [2024-04-25 18:00:45.925847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.267 18:00:45 -- accel/accel.sh@21 -- # val= 00:05:48.267 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.267 18:00:45 -- accel/accel.sh@21 -- # val= 00:05:48.267 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.267 18:00:45 -- accel/accel.sh@21 -- # val=0x1 00:05:48.267 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.267 18:00:45 -- accel/accel.sh@21 -- # val= 00:05:48.267 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.267 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.267 18:00:45 -- accel/accel.sh@21 -- # val= 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val= 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val=software 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val=32 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val=32 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 
-- # val=1 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val=No 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val= 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.268 18:00:45 -- accel/accel.sh@21 -- # val= 00:05:48.268 18:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.268 18:00:45 -- accel/accel.sh@20 -- # read -r var val 00:05:49.644 18:00:47 -- accel/accel.sh@21 -- # val= 00:05:49.644 18:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:49.644 18:00:47 -- accel/accel.sh@21 -- # val= 00:05:49.644 18:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:49.644 18:00:47 -- accel/accel.sh@21 -- # val= 00:05:49.644 18:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:49.644 18:00:47 -- accel/accel.sh@21 -- # val= 00:05:49.644 18:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:49.644 18:00:47 -- accel/accel.sh@21 -- # val= 00:05:49.644 18:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:49.644 18:00:47 -- accel/accel.sh@21 -- # val= 00:05:49.644 18:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:49.644 18:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:49.644 18:00:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.644 18:00:47 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:49.644 18:00:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.644 00:05:49.644 real 0m3.076s 00:05:49.644 user 0m2.640s 00:05:49.644 sys 0m0.233s 00:05:49.644 18:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.644 18:00:47 -- common/autotest_common.sh@10 -- # set +x 00:05:49.644 ************************************ 00:05:49.644 END TEST accel_dif_generate_copy 00:05:49.644 ************************************ 00:05:49.644 18:00:47 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:49.644 18:00:47 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.644 18:00:47 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:49.644 18:00:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.644 18:00:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.644 ************************************ 00:05:49.644 START TEST accel_comp 00:05:49.644 ************************************ 00:05:49.644 18:00:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.644 18:00:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.644 18:00:47 -- accel/accel.sh@17 -- # local accel_module 00:05:49.644 18:00:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.644 18:00:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.644 18:00:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.644 18:00:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.644 18:00:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.644 18:00:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.644 18:00:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.644 18:00:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.644 18:00:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.644 18:00:47 -- accel/accel.sh@42 -- # jq -r . 00:05:49.644 [2024-04-25 18:00:47.261757] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:49.644 [2024-04-25 18:00:47.261834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59126 ] 00:05:49.644 [2024-04-25 18:00:47.395985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.644 [2024-04-25 18:00:47.518573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.094 18:00:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:51.095 00:05:51.095 SPDK Configuration: 00:05:51.095 Core mask: 0x1 00:05:51.095 00:05:51.095 Accel Perf Configuration: 00:05:51.095 Workload Type: compress 00:05:51.095 Transfer size: 4096 bytes 00:05:51.095 Vector count 1 00:05:51.095 Module: software 00:05:51.095 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.095 Queue depth: 32 00:05:51.095 Allocate depth: 32 00:05:51.095 # threads/core: 1 00:05:51.095 Run time: 1 seconds 00:05:51.095 Verify: No 00:05:51.095 00:05:51.095 Running for 1 seconds... 
00:05:51.095 00:05:51.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.095 ------------------------------------------------------------------------------------ 00:05:51.095 0,0 48992/s 204 MiB/s 0 0 00:05:51.095 ==================================================================================== 00:05:51.095 Total 48992/s 191 MiB/s 0 0' 00:05:51.095 18:00:48 -- accel/accel.sh@20 -- # IFS=: 00:05:51.095 18:00:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.095 18:00:48 -- accel/accel.sh@20 -- # read -r var val 00:05:51.095 18:00:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.095 18:00:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.095 18:00:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.095 18:00:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.095 18:00:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.095 18:00:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.095 18:00:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.095 18:00:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.095 18:00:48 -- accel/accel.sh@42 -- # jq -r . 00:05:51.095 [2024-04-25 18:00:48.848141] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:51.095 [2024-04-25 18:00:48.848252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:05:51.095 [2024-04-25 18:00:48.976892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.353 [2024-04-25 18:00:49.087036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val=0x1 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val=compress 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 
00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val=software 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val=32 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val=32 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val=1 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.353 18:00:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.353 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.353 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.354 18:00:49 -- accel/accel.sh@21 -- # val=No 00:05:51.354 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.354 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.354 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.354 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.354 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.354 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.354 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.354 18:00:49 -- accel/accel.sh@21 -- # val= 00:05:51.354 18:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.354 18:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.354 18:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:52.728 18:00:50 -- accel/accel.sh@21 -- # val= 00:05:52.728 18:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.729 18:00:50 -- accel/accel.sh@21 -- # val= 00:05:52.729 18:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.729 18:00:50 -- accel/accel.sh@21 -- # val= 00:05:52.729 18:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.729 18:00:50 -- accel/accel.sh@21 -- # val= 
00:05:52.729 18:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.729 18:00:50 -- accel/accel.sh@21 -- # val= 00:05:52.729 18:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.729 18:00:50 -- accel/accel.sh@21 -- # val= 00:05:52.729 18:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.729 18:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.729 18:00:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.729 18:00:50 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:52.729 18:00:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.729 00:05:52.729 real 0m3.277s 00:05:52.729 user 0m2.803s 00:05:52.729 sys 0m0.268s 00:05:52.729 18:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.729 18:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:52.729 ************************************ 00:05:52.729 END TEST accel_comp 00:05:52.729 ************************************ 00:05:52.729 18:00:50 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:52.729 18:00:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:52.729 18:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.729 18:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:52.729 ************************************ 00:05:52.729 START TEST accel_decomp 00:05:52.729 ************************************ 00:05:52.729 18:00:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:52.729 18:00:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.729 18:00:50 -- accel/accel.sh@17 -- # local accel_module 00:05:52.729 18:00:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:52.729 18:00:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:52.729 18:00:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.729 18:00:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.729 18:00:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.729 18:00:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.729 18:00:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.729 18:00:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.729 18:00:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.729 18:00:50 -- accel/accel.sh@42 -- # jq -r . 00:05:52.729 [2024-04-25 18:00:50.588339] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:52.729 [2024-04-25 18:00:50.588484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ] 00:05:52.987 [2024-04-25 18:00:50.727255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.987 [2024-04-25 18:00:50.891172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.363 18:00:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:54.363 00:05:54.363 SPDK Configuration: 00:05:54.363 Core mask: 0x1 00:05:54.363 00:05:54.363 Accel Perf Configuration: 00:05:54.363 Workload Type: decompress 00:05:54.363 Transfer size: 4096 bytes 00:05:54.363 Vector count 1 00:05:54.363 Module: software 00:05:54.363 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:54.363 Queue depth: 32 00:05:54.363 Allocate depth: 32 00:05:54.363 # threads/core: 1 00:05:54.363 Run time: 1 seconds 00:05:54.363 Verify: Yes 00:05:54.363 00:05:54.363 Running for 1 seconds... 00:05:54.363 00:05:54.363 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.363 ------------------------------------------------------------------------------------ 00:05:54.363 0,0 71776/s 132 MiB/s 0 0 00:05:54.363 ==================================================================================== 00:05:54.363 Total 71776/s 280 MiB/s 0 0' 00:05:54.363 18:00:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:54.363 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.363 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.363 18:00:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:54.363 18:00:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.363 18:00:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.363 18:00:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.363 18:00:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.363 18:00:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.363 18:00:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.363 18:00:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.363 18:00:52 -- accel/accel.sh@42 -- # jq -r . 00:05:54.363 [2024-04-25 18:00:52.250953] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
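In the decompress runs above, -l /home/vagrant/spdk_repo/spdk/test/accel/bib names the input data file and -y enables verification; both flags are taken verbatim from the recorded command line. As a sanity check on the result table above, 71776 transfers/s at the reported 4096-byte transfer size is 71776 * 4096 / 2^20 ≈ 280 MiB/s, in line with the Total row.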
00:05:54.363 [2024-04-25 18:00:52.251044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59200 ] 00:05:54.622 [2024-04-25 18:00:52.382165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.622 [2024-04-25 18:00:52.497802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val=0x1 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val=decompress 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val=software 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val=32 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- 
accel/accel.sh@21 -- # val=32 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val=1 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.881 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.881 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.881 18:00:52 -- accel/accel.sh@21 -- # val=Yes 00:05:54.882 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.882 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.882 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.882 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.882 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.882 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.882 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.882 18:00:52 -- accel/accel.sh@21 -- # val= 00:05:54.882 18:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.882 18:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.882 18:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:56.257 18:00:53 -- accel/accel.sh@21 -- # val= 00:05:56.257 18:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.257 18:00:53 -- accel/accel.sh@21 -- # val= 00:05:56.257 18:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.257 18:00:53 -- accel/accel.sh@21 -- # val= 00:05:56.257 18:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.257 18:00:53 -- accel/accel.sh@21 -- # val= 00:05:56.257 18:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.257 18:00:53 -- accel/accel.sh@21 -- # val= 00:05:56.257 18:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.257 18:00:53 -- accel/accel.sh@21 -- # val= 00:05:56.257 18:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:56.257 18:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.257 18:00:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.257 18:00:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:56.257 18:00:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.257 00:05:56.257 real 0m3.254s 00:05:56.257 user 0m2.732s 00:05:56.257 sys 0m0.310s 00:05:56.257 18:00:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.257 18:00:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.257 ************************************ 00:05:56.257 END TEST accel_decomp 00:05:56.257 ************************************ 00:05:56.257 18:00:53 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:05:56.257 18:00:53 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:56.257 18:00:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.257 18:00:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.257 ************************************ 00:05:56.257 START TEST accel_decmop_full 00:05:56.257 ************************************ 00:05:56.257 18:00:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:56.257 18:00:53 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.257 18:00:53 -- accel/accel.sh@17 -- # local accel_module 00:05:56.257 18:00:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:56.257 18:00:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:56.257 18:00:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.257 18:00:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.257 18:00:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.257 18:00:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.257 18:00:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.257 18:00:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.257 18:00:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.257 18:00:53 -- accel/accel.sh@42 -- # jq -r . 00:05:56.257 [2024-04-25 18:00:53.884035] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:56.257 [2024-04-25 18:00:53.884123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59240 ] 00:05:56.257 [2024-04-25 18:00:54.014808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.257 [2024-04-25 18:00:54.105338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.633 18:00:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:57.633 00:05:57.633 SPDK Configuration: 00:05:57.633 Core mask: 0x1 00:05:57.633 00:05:57.633 Accel Perf Configuration: 00:05:57.633 Workload Type: decompress 00:05:57.633 Transfer size: 111250 bytes 00:05:57.633 Vector count 1 00:05:57.633 Module: software 00:05:57.633 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:57.633 Queue depth: 32 00:05:57.633 Allocate depth: 32 00:05:57.633 # threads/core: 1 00:05:57.633 Run time: 1 seconds 00:05:57.633 Verify: Yes 00:05:57.633 00:05:57.633 Running for 1 seconds... 
00:05:57.633 00:05:57.633 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.633 ------------------------------------------------------------------------------------ 00:05:57.633 0,0 5088/s 210 MiB/s 0 0 00:05:57.633 ==================================================================================== 00:05:57.633 Total 5088/s 539 MiB/s 0 0' 00:05:57.633 18:00:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:57.633 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:57.633 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:57.633 18:00:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:57.633 18:00:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.633 18:00:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.633 18:00:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.633 18:00:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.633 18:00:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.633 18:00:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.633 18:00:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.633 18:00:55 -- accel/accel.sh@42 -- # jq -r . 00:05:57.633 [2024-04-25 18:00:55.504003] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:57.633 [2024-04-25 18:00:55.504118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59254 ] 00:05:57.891 [2024-04-25 18:00:55.634873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.891 [2024-04-25 18:00:55.748158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=0x1 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=decompress 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:58.150 18:00:55 -- accel/accel.sh@20 
-- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=software 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=32 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=32 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=1 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val=Yes 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:58.150 18:00:55 -- accel/accel.sh@21 -- # val= 00:05:58.150 18:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:58.150 18:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:59.536 18:00:57 -- accel/accel.sh@21 -- # val= 00:05:59.536 18:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:59.536 18:00:57 -- accel/accel.sh@21 -- # val= 00:05:59.536 18:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:59.536 18:00:57 -- accel/accel.sh@21 -- # val= 00:05:59.536 18:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:59.536 18:00:57 -- accel/accel.sh@21 -- # 
val= 00:05:59.536 18:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:59.536 18:00:57 -- accel/accel.sh@21 -- # val= 00:05:59.536 18:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:59.536 18:00:57 -- accel/accel.sh@21 -- # val= 00:05:59.536 18:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:59.536 18:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:59.536 18:00:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.536 18:00:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:59.536 18:00:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.536 00:05:59.536 real 0m3.275s 00:05:59.536 user 0m2.759s 00:05:59.536 sys 0m0.303s 00:05:59.536 18:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.536 18:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.536 ************************************ 00:05:59.536 END TEST accel_decmop_full 00:05:59.536 ************************************ 00:05:59.536 18:00:57 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:59.536 18:00:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:59.536 18:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.536 18:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.536 ************************************ 00:05:59.536 START TEST accel_decomp_mcore 00:05:59.536 ************************************ 00:05:59.536 18:00:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:59.536 18:00:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.536 18:00:57 -- accel/accel.sh@17 -- # local accel_module 00:05:59.536 18:00:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:59.536 18:00:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:59.536 18:00:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.536 18:00:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.536 18:00:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.536 18:00:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.536 18:00:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.536 18:00:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.536 18:00:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.536 18:00:57 -- accel/accel.sh@42 -- # jq -r . 00:05:59.536 [2024-04-25 18:00:57.213570] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
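accel_decomp_mcore differs from the single-core decompress test only in the -m 0xf core mask on the recorded command line: 0xf is binary 1111, i.e. cores 0 through 3, which is why the EAL output that follows reports four available cores and starts a reactor on each, and why the result table carries one Core,Thread row per core. A stand-alone sketch of the same run, again omitting the JSON config normally supplied on /dev/fd/62, would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf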
00:05:59.536 [2024-04-25 18:00:57.213681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ] 00:05:59.536 [2024-04-25 18:00:57.342091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.536 [2024-04-25 18:00:57.461699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.536 [2024-04-25 18:00:57.461850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.536 [2024-04-25 18:00:57.461997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.536 [2024-04-25 18:00:57.462230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.913 18:00:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:00.913 00:06:00.913 SPDK Configuration: 00:06:00.913 Core mask: 0xf 00:06:00.913 00:06:00.913 Accel Perf Configuration: 00:06:00.913 Workload Type: decompress 00:06:00.913 Transfer size: 4096 bytes 00:06:00.913 Vector count 1 00:06:00.913 Module: software 00:06:00.913 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.913 Queue depth: 32 00:06:00.913 Allocate depth: 32 00:06:00.913 # threads/core: 1 00:06:00.913 Run time: 1 seconds 00:06:00.913 Verify: Yes 00:06:00.913 00:06:00.913 Running for 1 seconds... 00:06:00.913 00:06:00.913 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.913 ------------------------------------------------------------------------------------ 00:06:00.913 0,0 54432/s 100 MiB/s 0 0 00:06:00.913 3,0 49600/s 91 MiB/s 0 0 00:06:00.913 2,0 49664/s 91 MiB/s 0 0 00:06:00.913 1,0 47936/s 88 MiB/s 0 0 00:06:00.913 ==================================================================================== 00:06:00.913 Total 201632/s 787 MiB/s 0 0' 00:06:00.913 18:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:00.913 18:00:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:00.913 18:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:00.913 18:00:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:00.913 18:00:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.913 18:00:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.913 18:00:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.913 18:00:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.913 18:00:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.913 18:00:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.913 18:00:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.913 18:00:58 -- accel/accel.sh@42 -- # jq -r . 00:06:00.913 [2024-04-25 18:00:58.838183] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
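Each row of the results table above is one core,thread pair, and the Total line is just the per-core transfer rates added together (54432 + 49600 + 49664 + 47936 = 201632). A rough way to re-derive that total from a saved copy of such a table (file name hypothetical, field positions assumed from the layout above):

  # Re-derive the Total line from a saved copy of the table
  grep -E '^[0-9]+,[0-9]+ ' perf_table.txt \
    | awk '{gsub("/s", "", $2); sum += $2} END {printf "Total %d/s\n", sum}'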
00:06:00.913 [2024-04-25 18:00:58.838260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:06:01.172 [2024-04-25 18:00:58.971607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.431 [2024-04-25 18:00:59.105413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.431 [2024-04-25 18:00:59.105578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.431 [2024-04-25 18:00:59.105704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.431 [2024-04-25 18:00:59.105716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.431 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.431 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.431 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val=0xf 00:06:01.431 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.431 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.431 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val=decompress 00:06:01.431 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.431 18:00:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.431 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.431 18:00:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val=software 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 
00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val=32 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val=32 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val=1 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val=Yes 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:01.432 18:00:59 -- accel/accel.sh@21 -- # val= 00:06:01.432 18:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:01.432 18:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:02.816 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.816 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.816 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.816 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- 
accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@21 -- # val= 00:06:02.817 18:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:02.817 18:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:02.817 18:01:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.817 18:01:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:02.817 18:01:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.817 00:06:02.817 real 0m3.271s 00:06:02.817 user 0m9.892s 00:06:02.817 sys 0m0.323s 00:06:02.817 18:01:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.817 18:01:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.817 ************************************ 00:06:02.817 END TEST accel_decomp_mcore 00:06:02.817 ************************************ 00:06:02.817 18:01:00 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:02.817 18:01:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:02.817 18:01:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.817 18:01:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.817 ************************************ 00:06:02.817 START TEST accel_decomp_full_mcore 00:06:02.817 ************************************ 00:06:02.817 18:01:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:02.817 18:01:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.817 18:01:00 -- accel/accel.sh@17 -- # local accel_module 00:06:02.817 18:01:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:02.817 18:01:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:02.817 18:01:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.817 18:01:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.817 18:01:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.817 18:01:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.817 18:01:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.817 18:01:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.817 18:01:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.817 18:01:00 -- accel/accel.sh@42 -- # jq -r . 00:06:02.817 [2024-04-25 18:01:00.531219] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:02.817 [2024-04-25 18:01:00.531316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59355 ] 00:06:02.817 [2024-04-25 18:01:00.663128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.074 [2024-04-25 18:01:00.802835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.074 [2024-04-25 18:01:00.802990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.074 [2024-04-25 18:01:00.803124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.074 [2024-04-25 18:01:00.803473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.449 18:01:02 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:04.449 00:06:04.449 SPDK Configuration: 00:06:04.449 Core mask: 0xf 00:06:04.449 00:06:04.449 Accel Perf Configuration: 00:06:04.449 Workload Type: decompress 00:06:04.449 Transfer size: 111250 bytes 00:06:04.449 Vector count 1 00:06:04.449 Module: software 00:06:04.449 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.449 Queue depth: 32 00:06:04.449 Allocate depth: 32 00:06:04.449 # threads/core: 1 00:06:04.449 Run time: 1 seconds 00:06:04.449 Verify: Yes 00:06:04.449 00:06:04.449 Running for 1 seconds... 00:06:04.449 00:06:04.449 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.449 ------------------------------------------------------------------------------------ 00:06:04.449 0,0 4896/s 202 MiB/s 0 0 00:06:04.450 3,0 4416/s 182 MiB/s 0 0 00:06:04.450 2,0 4320/s 178 MiB/s 0 0 00:06:04.450 1,0 4288/s 177 MiB/s 0 0 00:06:04.450 ==================================================================================== 00:06:04.450 Total 17920/s 1901 MiB/s 0 0' 00:06:04.450 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.450 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.450 18:01:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.450 18:01:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.450 18:01:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.450 18:01:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.450 18:01:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.450 18:01:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.450 18:01:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.450 18:01:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.450 18:01:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.450 18:01:02 -- accel/accel.sh@42 -- # jq -r . 00:06:04.450 [2024-04-25 18:01:02.199340] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
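Both passes of every case hand accel_perf its accel configuration as JSON on an anonymous descriptor (-c /dev/fd/62), assembled by build_accel_config from the accel_json_cfg array. The same pattern can be reproduced with ordinary process substitution; the config body below is only a placeholder, not what the suite actually generates:

  # Placeholder config body; the suite assembles the real one from accel_json_cfg
  cfg='{"subsystems": []}'
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(printf '%s' "$cfg") \
      -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -y -o 0 -m 0xf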
00:06:04.450 [2024-04-25 18:01:02.199446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59383 ] 00:06:04.450 [2024-04-25 18:01:02.336177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.708 [2024-04-25 18:01:02.482358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.708 [2024-04-25 18:01:02.482533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.708 [2024-04-25 18:01:02.482641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.708 [2024-04-25 18:01:02.483003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val=0xf 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val=decompress 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val=software 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 
00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val=32 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val=32 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val=1 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.708 18:01:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.708 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.708 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.709 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.709 18:01:02 -- accel/accel.sh@21 -- # val=Yes 00:06:04.709 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.709 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.709 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.709 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.709 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.709 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.709 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.709 18:01:02 -- accel/accel.sh@21 -- # val= 00:06:04.709 18:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.709 18:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.709 18:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- 
accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@21 -- # val= 00:06:06.085 18:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.085 18:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.085 18:01:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.085 18:01:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:06.085 18:01:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.085 00:06:06.085 real 0m3.352s 00:06:06.085 user 0m10.047s 00:06:06.085 sys 0m0.340s 00:06:06.085 18:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.085 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.085 ************************************ 00:06:06.085 END TEST accel_decomp_full_mcore 00:06:06.085 ************************************ 00:06:06.085 18:01:03 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.085 18:01:03 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:06.085 18:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.085 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.085 ************************************ 00:06:06.085 START TEST accel_decomp_mthread 00:06:06.085 ************************************ 00:06:06.085 18:01:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.085 18:01:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.085 18:01:03 -- accel/accel.sh@17 -- # local accel_module 00:06:06.085 18:01:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.085 18:01:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.085 18:01:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.085 18:01:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.085 18:01:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.085 18:01:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.085 18:01:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.085 18:01:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.085 18:01:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.085 18:01:03 -- accel/accel.sh@42 -- # jq -r . 00:06:06.085 [2024-04-25 18:01:03.930991] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:06.085 [2024-04-25 18:01:03.931356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59421 ] 00:06:06.343 [2024-04-25 18:01:04.065989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.343 [2024-04-25 18:01:04.214174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.718 18:01:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:07.718 00:06:07.718 SPDK Configuration: 00:06:07.718 Core mask: 0x1 00:06:07.718 00:06:07.718 Accel Perf Configuration: 00:06:07.718 Workload Type: decompress 00:06:07.718 Transfer size: 4096 bytes 00:06:07.718 Vector count 1 00:06:07.718 Module: software 00:06:07.718 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.718 Queue depth: 32 00:06:07.718 Allocate depth: 32 00:06:07.718 # threads/core: 2 00:06:07.718 Run time: 1 seconds 00:06:07.718 Verify: Yes 00:06:07.718 00:06:07.718 Running for 1 seconds... 00:06:07.718 00:06:07.718 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.718 ------------------------------------------------------------------------------------ 00:06:07.718 0,1 38784/s 71 MiB/s 0 0 00:06:07.718 0,0 38624/s 71 MiB/s 0 0 00:06:07.718 ==================================================================================== 00:06:07.718 Total 77408/s 302 MiB/s 0 0' 00:06:07.718 18:01:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.718 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.718 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.718 18:01:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.718 18:01:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.718 18:01:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.718 18:01:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.718 18:01:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.718 18:01:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.718 18:01:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.718 18:01:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.719 18:01:05 -- accel/accel.sh@42 -- # jq -r . 00:06:07.719 [2024-04-25 18:01:05.575873] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
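The mthread variant keeps the default single-core mask (0x1 in the output above) but asks for two worker threads per core with -T 2, which is why the table lists rows 0,0 and 0,1 instead of separate cores. A minimal invocation along the same lines, with the same path assumptions as the earlier sketches:

  # Default single-core mask, two worker threads on that core
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -y -T 2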
00:06:07.719 [2024-04-25 18:01:05.575996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59440 ] 00:06:07.977 [2024-04-25 18:01:05.714548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.977 [2024-04-25 18:01:05.860082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.235 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.235 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.235 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.235 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.235 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.235 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.235 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.235 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.235 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.235 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.235 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.235 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.235 18:01:05 -- accel/accel.sh@21 -- # val=0x1 00:06:08.235 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.235 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val=decompress 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val=software 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val=32 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- 
accel/accel.sh@21 -- # val=32 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val=2 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val=Yes 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.236 18:01:05 -- accel/accel.sh@21 -- # val= 00:06:08.236 18:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:08.236 18:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@21 -- # val= 00:06:09.611 18:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@21 -- # val= 00:06:09.611 18:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@21 -- # val= 00:06:09.611 18:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@21 -- # val= 00:06:09.611 18:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@21 -- # val= 00:06:09.611 18:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@21 -- # val= 00:06:09.611 18:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@21 -- # val= 00:06:09.611 18:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:09.611 18:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:09.611 18:01:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.611 18:01:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:09.611 18:01:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.611 00:06:09.611 real 0m3.319s 00:06:09.611 user 0m2.820s 00:06:09.611 sys 0m0.290s 00:06:09.611 18:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.611 18:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:09.611 ************************************ 00:06:09.611 END 
TEST accel_decomp_mthread 00:06:09.611 ************************************ 00:06:09.611 18:01:07 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.611 18:01:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:09.611 18:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.611 18:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:09.611 ************************************ 00:06:09.611 START TEST accel_deomp_full_mthread 00:06:09.611 ************************************ 00:06:09.611 18:01:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.611 18:01:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.611 18:01:07 -- accel/accel.sh@17 -- # local accel_module 00:06:09.611 18:01:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.611 18:01:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.611 18:01:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.611 18:01:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.611 18:01:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.611 18:01:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.611 18:01:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.611 18:01:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.611 18:01:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.611 18:01:07 -- accel/accel.sh@42 -- # jq -r . 00:06:09.611 [2024-04-25 18:01:07.296450] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:09.611 [2024-04-25 18:01:07.296545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59480 ] 00:06:09.611 [2024-04-25 18:01:07.431702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.870 [2024-04-25 18:01:07.570030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.245 18:01:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:11.245 00:06:11.245 SPDK Configuration: 00:06:11.245 Core mask: 0x1 00:06:11.245 00:06:11.245 Accel Perf Configuration: 00:06:11.245 Workload Type: decompress 00:06:11.245 Transfer size: 111250 bytes 00:06:11.245 Vector count 1 00:06:11.245 Module: software 00:06:11.245 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.245 Queue depth: 32 00:06:11.245 Allocate depth: 32 00:06:11.245 # threads/core: 2 00:06:11.245 Run time: 1 seconds 00:06:11.245 Verify: Yes 00:06:11.245 00:06:11.245 Running for 1 seconds... 
00:06:11.245 00:06:11.245 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.245 ------------------------------------------------------------------------------------ 00:06:11.245 0,1 2496/s 103 MiB/s 0 0 00:06:11.245 0,0 2496/s 103 MiB/s 0 0 00:06:11.245 ==================================================================================== 00:06:11.245 Total 4992/s 529 MiB/s 0 0' 00:06:11.245 18:01:08 -- accel/accel.sh@20 -- # IFS=: 00:06:11.245 18:01:08 -- accel/accel.sh@20 -- # read -r var val 00:06:11.245 18:01:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:11.245 18:01:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:11.245 18:01:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.245 18:01:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.245 18:01:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.245 18:01:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.245 18:01:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.245 18:01:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.245 18:01:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.245 18:01:08 -- accel/accel.sh@42 -- # jq -r . 00:06:11.245 [2024-04-25 18:01:08.944029] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:11.245 [2024-04-25 18:01:08.944106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59500 ] 00:06:11.245 [2024-04-25 18:01:09.074443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.503 [2024-04-25 18:01:09.193579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=0x1 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=decompress 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@24 -- # 
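The START TEST / END TEST banners and the real/user/sys triplets around each case come from the run_test helper in common/autotest_common.sh, which times the wrapped command. A heavily simplified stand-in for that pattern (not the actual helper, which also tracks xtrace and exit handling) might look like:

  # Simplified stand-in for run_test: banner, time the command, banner again
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  run_test_sketch accel_decomp_mthread \
      /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2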
accel_opc=decompress 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=software 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=32 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=32 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=2 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.503 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.503 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.503 18:01:09 -- accel/accel.sh@21 -- # val=Yes 00:06:11.504 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.504 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.504 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.504 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.504 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.504 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.504 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:11.504 18:01:09 -- accel/accel.sh@21 -- # val= 00:06:11.504 18:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.504 18:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:11.504 18:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@21 -- # val= 00:06:12.881 18:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@21 -- # val= 00:06:12.881 18:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@21 -- # val= 00:06:12.881 18:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # 
read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@21 -- # val= 00:06:12.881 18:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@21 -- # val= 00:06:12.881 18:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@21 -- # val= 00:06:12.881 18:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@21 -- # val= 00:06:12.881 18:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.881 18:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.881 18:01:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.881 18:01:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:12.881 18:01:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.881 00:06:12.881 real 0m3.302s 00:06:12.881 user 0m2.808s 00:06:12.881 sys 0m0.284s 00:06:12.881 18:01:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.881 ************************************ 00:06:12.881 END TEST accel_deomp_full_mthread 00:06:12.881 ************************************ 00:06:12.881 18:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:12.881 18:01:10 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:12.881 18:01:10 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:12.881 18:01:10 -- accel/accel.sh@129 -- # build_accel_config 00:06:12.881 18:01:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.881 18:01:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.881 18:01:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:12.881 18:01:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.881 18:01:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.881 18:01:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.881 18:01:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.881 18:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:12.881 18:01:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.881 18:01:10 -- accel/accel.sh@42 -- # jq -r . 00:06:12.881 ************************************ 00:06:12.881 START TEST accel_dif_functional_tests 00:06:12.881 ************************************ 00:06:12.881 18:01:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:12.881 [2024-04-25 18:01:10.672627] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:12.881 [2024-04-25 18:01:10.672743] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:06:12.881 [2024-04-25 18:01:10.800349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.140 [2024-04-25 18:01:10.952456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.140 [2024-04-25 18:01:10.952635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.140 [2024-04-25 18:01:10.952637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.399 00:06:13.399 00:06:13.399 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.399 http://cunit.sourceforge.net/ 00:06:13.399 00:06:13.399 00:06:13.399 Suite: accel_dif 00:06:13.399 Test: verify: DIF generated, GUARD check ...passed 00:06:13.399 Test: verify: DIF generated, APPTAG check ...passed 00:06:13.399 Test: verify: DIF generated, REFTAG check ...passed 00:06:13.399 Test: verify: DIF not generated, GUARD check ...passed 00:06:13.399 Test: verify: DIF not generated, APPTAG check ...passed 00:06:13.399 Test: verify: DIF not generated, REFTAG check ...passed 00:06:13.399 Test: verify: APPTAG correct, APPTAG check ...[2024-04-25 18:01:11.086205] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:13.399 [2024-04-25 18:01:11.086291] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:13.399 [2024-04-25 18:01:11.086340] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:13.399 [2024-04-25 18:01:11.086384] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:13.399 [2024-04-25 18:01:11.086408] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:13.399 [2024-04-25 18:01:11.086433] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:13.399 passed 00:06:13.399 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:13.399 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:13.399 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:13.399 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:13.399 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:13.399 Test: generate copy: DIF generated, GUARD check ...passed 00:06:13.399 Test: generate copy: DIF generated, APTTAG check ...[2024-04-25 18:01:11.086485] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:13.399 [2024-04-25 18:01:11.086621] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:13.399 passed 00:06:13.399 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:13.399 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:13.399 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:13.399 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:13.399 Test: generate copy: iovecs-len validate ...passed 00:06:13.399 Test: generate copy: buffer alignment validate ...passed 00:06:13.399 00:06:13.399 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.399 suites 1 1 n/a 0 0 00:06:13.399 tests 20 20 20 0 0 00:06:13.399 
asserts 204 204 204 0 n/a 00:06:13.399 00:06:13.399 Elapsed time = 0.002 seconds 00:06:13.399 [2024-04-25 18:01:11.086853] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:13.658 00:06:13.658 real 0m0.891s 00:06:13.658 user 0m1.384s 00:06:13.658 sys 0m0.202s 00:06:13.658 18:01:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.658 18:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.658 ************************************ 00:06:13.658 END TEST accel_dif_functional_tests 00:06:13.658 ************************************ 00:06:13.658 00:06:13.658 real 1m7.499s 00:06:13.658 user 1m12.120s 00:06:13.658 sys 0m6.721s 00:06:13.658 18:01:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.658 18:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.658 ************************************ 00:06:13.658 END TEST accel 00:06:13.658 ************************************ 00:06:13.917 18:01:11 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:13.917 18:01:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.917 18:01:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.917 18:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 ************************************ 00:06:13.917 START TEST accel_rpc 00:06:13.917 ************************************ 00:06:13.917 18:01:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:13.917 * Looking for test storage... 00:06:13.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:13.917 18:01:11 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.917 18:01:11 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59604 00:06:13.917 18:01:11 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:13.917 18:01:11 -- accel/accel_rpc.sh@15 -- # waitforlisten 59604 00:06:13.917 18:01:11 -- common/autotest_common.sh@819 -- # '[' -z 59604 ']' 00:06:13.917 18:01:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.917 18:01:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.917 18:01:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.917 18:01:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.917 18:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 [2024-04-25 18:01:11.769726] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
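The accel_rpc test that starts here launches a bare spdk_tgt with --wait-for-rpc and then drives it over rpc.py. The assign-opcode sequence it exercises in the next lines could be run by hand roughly as below, assuming the target is already up and listening on the default RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Try to pin the 'copy' opcode to a non-existent module, then to software
  $rpc accel_assign_opc -o copy -m incorrect
  $rpc accel_assign_opc -o copy -m software

  # Finish startup, then confirm which module actually owns 'copy'
  $rpc framework_start_init
  $rpc accel_get_opc_assignments | jq -r .copy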
00:06:13.917 [2024-04-25 18:01:11.769845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59604 ] 00:06:14.175 [2024-04-25 18:01:11.906884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.175 [2024-04-25 18:01:12.054032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.175 [2024-04-25 18:01:12.054224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.131 18:01:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.132 18:01:12 -- common/autotest_common.sh@852 -- # return 0 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:15.132 18:01:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.132 18:01:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.132 18:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.132 ************************************ 00:06:15.132 START TEST accel_assign_opcode 00:06:15.132 ************************************ 00:06:15.132 18:01:12 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:15.132 18:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.132 18:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.132 [2024-04-25 18:01:12.766827] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:15.132 18:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:15.132 18:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.132 18:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.132 [2024-04-25 18:01:12.774814] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:15.132 18:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.132 18:01:12 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:15.132 18:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.132 18:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.389 18:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.389 18:01:13 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:15.389 18:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.389 18:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:15.389 18:01:13 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:15.389 18:01:13 -- accel/accel_rpc.sh@42 -- # grep software 00:06:15.389 18:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.389 software 00:06:15.389 00:06:15.389 real 0m0.429s 00:06:15.389 user 0m0.051s 00:06:15.389 sys 0m0.013s 00:06:15.389 18:01:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.389 18:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:15.390 ************************************ 
00:06:15.390 END TEST accel_assign_opcode 00:06:15.390 ************************************ 00:06:15.390 18:01:13 -- accel/accel_rpc.sh@55 -- # killprocess 59604 00:06:15.390 18:01:13 -- common/autotest_common.sh@926 -- # '[' -z 59604 ']' 00:06:15.390 18:01:13 -- common/autotest_common.sh@930 -- # kill -0 59604 00:06:15.390 18:01:13 -- common/autotest_common.sh@931 -- # uname 00:06:15.390 18:01:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.390 18:01:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59604 00:06:15.390 18:01:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:15.390 killing process with pid 59604 00:06:15.390 18:01:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:15.390 18:01:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59604' 00:06:15.390 18:01:13 -- common/autotest_common.sh@945 -- # kill 59604 00:06:15.390 18:01:13 -- common/autotest_common.sh@950 -- # wait 59604 00:06:16.325 00:06:16.325 real 0m2.274s 00:06:16.325 user 0m2.240s 00:06:16.325 sys 0m0.598s 00:06:16.325 18:01:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.325 ************************************ 00:06:16.325 END TEST accel_rpc 00:06:16.325 ************************************ 00:06:16.325 18:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:16.325 18:01:13 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.325 18:01:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.325 18:01:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.325 18:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:16.325 ************************************ 00:06:16.325 START TEST app_cmdline 00:06:16.325 ************************************ 00:06:16.325 18:01:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.325 * Looking for test storage... 00:06:16.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:16.325 18:01:14 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:16.325 18:01:14 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59721 00:06:16.325 18:01:14 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:16.325 18:01:14 -- app/cmdline.sh@18 -- # waitforlisten 59721 00:06:16.325 18:01:14 -- common/autotest_common.sh@819 -- # '[' -z 59721 ']' 00:06:16.325 18:01:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.325 18:01:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.325 18:01:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.325 18:01:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.325 18:01:14 -- common/autotest_common.sh@10 -- # set +x 00:06:16.325 [2024-04-25 18:01:14.109072] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
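The accel_rpc test above drives opcode-to-module assignment entirely over JSON-RPC while the target is held at the --wait-for-rpc barrier. A condensed sketch of that RPC sequence using the rpc.py client referenced elsewhere in this log (assumes a spdk_tgt already started with --wait-for-rpc and listening on the default /var/tmp/spdk.sock, as in the run above):

    # Condensed sketch of the accel_rpc flow exercised above (assumptions noted in the text).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software      # pin the "copy" opcode to the software module
    $rpc framework_start_init                      # finish subsystem initialization
    $rpc accel_get_opc_assignments | jq -r .copy   # prints "software" once the assignment took effect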
00:06:16.325 [2024-04-25 18:01:14.109229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59721 ] 00:06:16.325 [2024-04-25 18:01:14.247458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.583 [2024-04-25 18:01:14.392520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.584 [2024-04-25 18:01:14.392737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.519 18:01:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.519 18:01:15 -- common/autotest_common.sh@852 -- # return 0 00:06:17.519 18:01:15 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:17.519 { 00:06:17.519 "fields": { 00:06:17.519 "commit": "36faa8c31", 00:06:17.519 "major": 24, 00:06:17.519 "minor": 1, 00:06:17.519 "patch": 1, 00:06:17.519 "suffix": "-pre" 00:06:17.519 }, 00:06:17.519 "version": "SPDK v24.01.1-pre git sha1 36faa8c31" 00:06:17.519 } 00:06:17.519 18:01:15 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:17.519 18:01:15 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:17.520 18:01:15 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:17.520 18:01:15 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:17.520 18:01:15 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:17.520 18:01:15 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:17.520 18:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:17.520 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.520 18:01:15 -- app/cmdline.sh@26 -- # sort 00:06:17.520 18:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:17.520 18:01:15 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:17.520 18:01:15 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:17.520 18:01:15 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.520 18:01:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.520 18:01:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.520 18:01:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.520 18:01:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.520 18:01:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.520 18:01:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.520 18:01:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.520 18:01:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.520 18:01:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.520 18:01:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:17.520 18:01:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.778 2024/04/25 18:01:15 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:17.778 request: 00:06:17.778 { 00:06:17.778 "method": "env_dpdk_get_mem_stats", 00:06:17.778 "params": {} 00:06:17.778 } 00:06:17.778 Got JSON-RPC error response 00:06:17.778 GoRPCClient: error on JSON-RPC call 00:06:17.778 18:01:15 -- common/autotest_common.sh@643 -- # es=1 00:06:17.778 18:01:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.778 18:01:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.778 18:01:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.778 18:01:15 -- app/cmdline.sh@1 -- # killprocess 59721 00:06:17.778 18:01:15 -- common/autotest_common.sh@926 -- # '[' -z 59721 ']' 00:06:17.778 18:01:15 -- common/autotest_common.sh@930 -- # kill -0 59721 00:06:17.778 18:01:15 -- common/autotest_common.sh@931 -- # uname 00:06:17.778 18:01:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:17.778 18:01:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59721 00:06:17.778 18:01:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.778 killing process with pid 59721 00:06:17.778 18:01:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.778 18:01:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59721' 00:06:17.778 18:01:15 -- common/autotest_common.sh@945 -- # kill 59721 00:06:17.778 18:01:15 -- common/autotest_common.sh@950 -- # wait 59721 00:06:18.345 00:06:18.345 real 0m2.313s 00:06:18.345 user 0m2.702s 00:06:18.345 sys 0m0.604s 00:06:18.345 18:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.345 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.345 ************************************ 00:06:18.345 END TEST app_cmdline 00:06:18.345 ************************************ 00:06:18.603 18:01:16 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:18.603 18:01:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.603 18:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.603 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.603 ************************************ 00:06:18.603 START TEST version 00:06:18.603 ************************************ 00:06:18.603 18:01:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:18.603 * Looking for test storage... 
00:06:18.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:18.603 18:01:16 -- app/version.sh@17 -- # get_header_version major 00:06:18.603 18:01:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.603 18:01:16 -- app/version.sh@14 -- # cut -f2 00:06:18.603 18:01:16 -- app/version.sh@14 -- # tr -d '"' 00:06:18.603 18:01:16 -- app/version.sh@17 -- # major=24 00:06:18.603 18:01:16 -- app/version.sh@18 -- # get_header_version minor 00:06:18.603 18:01:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.603 18:01:16 -- app/version.sh@14 -- # cut -f2 00:06:18.603 18:01:16 -- app/version.sh@14 -- # tr -d '"' 00:06:18.603 18:01:16 -- app/version.sh@18 -- # minor=1 00:06:18.603 18:01:16 -- app/version.sh@19 -- # get_header_version patch 00:06:18.603 18:01:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.603 18:01:16 -- app/version.sh@14 -- # cut -f2 00:06:18.603 18:01:16 -- app/version.sh@14 -- # tr -d '"' 00:06:18.603 18:01:16 -- app/version.sh@19 -- # patch=1 00:06:18.603 18:01:16 -- app/version.sh@20 -- # get_header_version suffix 00:06:18.603 18:01:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.603 18:01:16 -- app/version.sh@14 -- # cut -f2 00:06:18.603 18:01:16 -- app/version.sh@14 -- # tr -d '"' 00:06:18.603 18:01:16 -- app/version.sh@20 -- # suffix=-pre 00:06:18.603 18:01:16 -- app/version.sh@22 -- # version=24.1 00:06:18.603 18:01:16 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:18.603 18:01:16 -- app/version.sh@25 -- # version=24.1.1 00:06:18.603 18:01:16 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:18.603 18:01:16 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:18.603 18:01:16 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:18.603 18:01:16 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:18.603 18:01:16 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:18.603 00:06:18.603 real 0m0.161s 00:06:18.603 user 0m0.082s 00:06:18.603 sys 0m0.115s 00:06:18.603 18:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.603 ************************************ 00:06:18.603 END TEST version 00:06:18.603 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.603 ************************************ 00:06:18.604 18:01:16 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:18.604 18:01:16 -- spdk/autotest.sh@204 -- # uname -s 00:06:18.862 18:01:16 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:18.862 18:01:16 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:18.862 18:01:16 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:18.862 18:01:16 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:18.862 18:01:16 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:18.862 18:01:16 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:18.862 18:01:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:18.862 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.862 18:01:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:18.862 18:01:16 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:18.862 18:01:16 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:18.862 18:01:16 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:18.862 18:01:16 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:18.862 18:01:16 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:18.862 18:01:16 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.862 18:01:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:18.862 18:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.862 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.862 ************************************ 00:06:18.862 START TEST nvmf_tcp 00:06:18.862 ************************************ 00:06:18.862 18:01:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.862 * Looking for test storage... 00:06:18.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.862 18:01:16 -- nvmf/common.sh@7 -- # uname -s 00:06:18.862 18:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.862 18:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.862 18:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.862 18:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.862 18:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.862 18:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.862 18:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.862 18:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.862 18:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.862 18:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.862 18:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:18.862 18:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:18.862 18:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.862 18:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.862 18:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:18.862 18:01:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.862 18:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.862 18:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.862 18:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.862 18:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.862 18:01:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.862 18:01:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.862 18:01:16 -- paths/export.sh@5 -- # export PATH 00:06:18.862 18:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.862 18:01:16 -- nvmf/common.sh@46 -- # : 0 00:06:18.862 18:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:18.862 18:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:18.862 18:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:18.862 18:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.862 18:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.862 18:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:18.862 18:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:18.862 18:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:18.862 18:01:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:18.862 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:18.862 18:01:16 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:18.863 18:01:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:18.863 18:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.863 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.863 ************************************ 00:06:18.863 START TEST nvmf_example 00:06:18.863 ************************************ 00:06:18.863 18:01:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:19.121 * Looking for test storage... 
00:06:19.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:19.121 18:01:16 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:19.121 18:01:16 -- nvmf/common.sh@7 -- # uname -s 00:06:19.121 18:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.121 18:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.121 18:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.122 18:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.122 18:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.122 18:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.122 18:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.122 18:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.122 18:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.122 18:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.122 18:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:19.122 18:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:19.122 18:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.122 18:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.122 18:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:19.122 18:01:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:19.122 18:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.122 18:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.122 18:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.122 18:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.122 18:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.122 18:01:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.122 18:01:16 -- 
paths/export.sh@5 -- # export PATH 00:06:19.122 18:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.122 18:01:16 -- nvmf/common.sh@46 -- # : 0 00:06:19.122 18:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:19.122 18:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:19.122 18:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:19.122 18:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.122 18:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.122 18:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:19.122 18:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:19.122 18:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:19.122 18:01:16 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:19.122 18:01:16 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:19.122 18:01:16 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:19.122 18:01:16 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:19.122 18:01:16 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:19.122 18:01:16 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:19.122 18:01:16 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:19.122 18:01:16 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:19.122 18:01:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:19.122 18:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:19.122 18:01:16 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:19.122 18:01:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:19.122 18:01:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.122 18:01:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:19.122 18:01:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:19.122 18:01:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:19.122 18:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.122 18:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:19.122 18:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.122 18:01:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:19.122 18:01:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:19.122 18:01:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:19.122 18:01:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:19.122 18:01:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:19.122 18:01:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:19.122 18:01:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.122 18:01:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.122 18:01:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:19.122 18:01:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:19.122 18:01:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:19.122 18:01:16 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:19.122 18:01:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:19.122 18:01:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.122 18:01:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:19.122 18:01:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:19.122 18:01:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:19.122 18:01:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:19.122 18:01:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:19.122 Cannot find device "nvmf_init_br" 00:06:19.122 18:01:16 -- nvmf/common.sh@153 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:19.122 Cannot find device "nvmf_tgt_br" 00:06:19.122 18:01:16 -- nvmf/common.sh@154 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:19.122 Cannot find device "nvmf_tgt_br2" 00:06:19.122 18:01:16 -- nvmf/common.sh@155 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:19.122 Cannot find device "nvmf_init_br" 00:06:19.122 18:01:16 -- nvmf/common.sh@156 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:19.122 Cannot find device "nvmf_tgt_br" 00:06:19.122 18:01:16 -- nvmf/common.sh@157 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:19.122 Cannot find device "nvmf_tgt_br2" 00:06:19.122 18:01:16 -- nvmf/common.sh@158 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:19.122 Cannot find device "nvmf_br" 00:06:19.122 18:01:16 -- nvmf/common.sh@159 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:19.122 Cannot find device "nvmf_init_if" 00:06:19.122 18:01:16 -- nvmf/common.sh@160 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:19.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:19.122 18:01:16 -- nvmf/common.sh@161 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:19.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:19.122 18:01:16 -- nvmf/common.sh@162 -- # true 00:06:19.122 18:01:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:19.122 18:01:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:19.122 18:01:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:19.122 18:01:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:19.122 18:01:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:19.122 18:01:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:19.122 18:01:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:19.122 18:01:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:19.122 18:01:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:19.122 18:01:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:19.122 
18:01:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:19.122 18:01:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:19.122 18:01:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:19.122 18:01:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:19.381 18:01:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:19.381 18:01:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:19.381 18:01:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:19.381 18:01:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:19.381 18:01:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:19.381 18:01:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:19.381 18:01:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:19.381 18:01:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:19.381 18:01:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:19.381 18:01:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:19.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:06:19.381 00:06:19.381 --- 10.0.0.2 ping statistics --- 00:06:19.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.381 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:06:19.381 18:01:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:19.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:19.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:06:19.381 00:06:19.381 --- 10.0.0.3 ping statistics --- 00:06:19.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.381 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:06:19.381 18:01:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:19.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:19.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:06:19.381 00:06:19.381 --- 10.0.0.1 ping statistics --- 00:06:19.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.381 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:06:19.381 18:01:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.381 18:01:17 -- nvmf/common.sh@421 -- # return 0 00:06:19.381 18:01:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:19.381 18:01:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.381 18:01:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:19.381 18:01:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:19.381 18:01:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.381 18:01:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:19.381 18:01:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:19.381 18:01:17 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:19.381 18:01:17 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:19.381 18:01:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:19.381 18:01:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.381 18:01:17 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:19.381 18:01:17 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:19.381 18:01:17 -- target/nvmf_example.sh@34 -- # nvmfpid=60078 00:06:19.381 18:01:17 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:19.381 18:01:17 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:19.381 18:01:17 -- target/nvmf_example.sh@36 -- # waitforlisten 60078 00:06:19.381 18:01:17 -- common/autotest_common.sh@819 -- # '[' -z 60078 ']' 00:06:19.381 18:01:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.381 18:01:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.382 18:01:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
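The nvmf_veth_init output above builds the virtual test network the example target listens on: a bridge (nvmf_br) joining a host-side veth (nvmf_init_if, 10.0.0.1) to veth peers that live inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), verified by the pings shown. An abridged sketch of the same topology, using the interface names and addresses from the log (the second target interface and the teardown path are omitted):

    # Abridged sketch of the veth/bridge topology set up by nvmf_veth_init above
    # (same names and addresses as the log; second target interface and cleanup omitted).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace, as checked in the log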
00:06:19.382 18:01:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.382 18:01:17 -- common/autotest_common.sh@10 -- # set +x 00:06:20.772 18:01:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.772 18:01:18 -- common/autotest_common.sh@852 -- # return 0 00:06:20.772 18:01:18 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:20.772 18:01:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:20.772 18:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.772 18:01:18 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:20.772 18:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.772 18:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.772 18:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.772 18:01:18 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:20.772 18:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.772 18:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.772 18:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.772 18:01:18 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:20.772 18:01:18 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.772 18:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.772 18:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.772 18:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.772 18:01:18 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:20.772 18:01:18 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:20.772 18:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.772 18:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.772 18:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.772 18:01:18 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.772 18:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.772 18:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.772 18:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.772 18:01:18 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:20.772 18:01:18 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:30.777 Initializing NVMe Controllers 00:06:30.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:30.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:30.778 Initialization complete. Launching workers. 
00:06:30.778 ======================================================== 00:06:30.778 Latency(us) 00:06:30.778 Device Information : IOPS MiB/s Average min max 00:06:30.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15174.45 59.28 4217.32 786.57 23154.57 00:06:30.778 ======================================================== 00:06:30.778 Total : 15174.45 59.28 4217.32 786.57 23154.57 00:06:30.778 00:06:30.778 18:01:28 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:30.778 18:01:28 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:30.778 18:01:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:30.778 18:01:28 -- nvmf/common.sh@116 -- # sync 00:06:30.778 18:01:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:30.778 18:01:28 -- nvmf/common.sh@119 -- # set +e 00:06:30.778 18:01:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:30.778 18:01:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:30.778 rmmod nvme_tcp 00:06:31.036 rmmod nvme_fabrics 00:06:31.036 rmmod nvme_keyring 00:06:31.036 18:01:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:31.036 18:01:28 -- nvmf/common.sh@123 -- # set -e 00:06:31.036 18:01:28 -- nvmf/common.sh@124 -- # return 0 00:06:31.036 18:01:28 -- nvmf/common.sh@477 -- # '[' -n 60078 ']' 00:06:31.036 18:01:28 -- nvmf/common.sh@478 -- # killprocess 60078 00:06:31.036 18:01:28 -- common/autotest_common.sh@926 -- # '[' -z 60078 ']' 00:06:31.036 18:01:28 -- common/autotest_common.sh@930 -- # kill -0 60078 00:06:31.036 18:01:28 -- common/autotest_common.sh@931 -- # uname 00:06:31.036 18:01:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.036 18:01:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60078 00:06:31.036 18:01:28 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:06:31.036 18:01:28 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:06:31.036 killing process with pid 60078 00:06:31.036 18:01:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60078' 00:06:31.036 18:01:28 -- common/autotest_common.sh@945 -- # kill 60078 00:06:31.036 18:01:28 -- common/autotest_common.sh@950 -- # wait 60078 00:06:31.294 nvmf threads initialize successfully 00:06:31.294 bdev subsystem init successfully 00:06:31.294 created a nvmf target service 00:06:31.294 create targets's poll groups done 00:06:31.294 all subsystems of target started 00:06:31.294 nvmf target is running 00:06:31.294 all subsystems of target stopped 00:06:31.294 destroy targets's poll groups done 00:06:31.294 destroyed the nvmf target service 00:06:31.294 bdev subsystem finish successfully 00:06:31.294 nvmf threads destroy successfully 00:06:31.294 18:01:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:31.294 18:01:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:31.294 18:01:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:31.294 18:01:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:31.294 18:01:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:31.294 18:01:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.294 18:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.294 18:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.294 18:01:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:31.294 18:01:29 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:31.294 18:01:29 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:06:31.294 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.294 00:06:31.294 real 0m12.457s 00:06:31.294 user 0m44.530s 00:06:31.294 sys 0m2.110s 00:06:31.294 18:01:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.294 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.294 ************************************ 00:06:31.294 END TEST nvmf_example 00:06:31.294 ************************************ 00:06:31.294 18:01:29 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:31.294 18:01:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:31.294 18:01:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.294 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.555 ************************************ 00:06:31.555 START TEST nvmf_filesystem 00:06:31.555 ************************************ 00:06:31.555 18:01:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:31.555 * Looking for test storage... 00:06:31.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.555 18:01:29 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:31.555 18:01:29 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:31.555 18:01:29 -- common/autotest_common.sh@34 -- # set -e 00:06:31.555 18:01:29 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:31.555 18:01:29 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:31.555 18:01:29 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:31.555 18:01:29 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:31.555 18:01:29 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:31.555 18:01:29 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:31.555 18:01:29 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:31.555 18:01:29 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:31.555 18:01:29 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:31.555 18:01:29 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:31.555 18:01:29 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:31.555 18:01:29 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:31.555 18:01:29 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:31.555 18:01:29 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:31.555 18:01:29 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:31.555 18:01:29 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:31.555 18:01:29 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:31.555 18:01:29 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:31.555 18:01:29 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:31.555 18:01:29 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:31.555 18:01:29 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:31.555 18:01:29 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:31.555 18:01:29 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:31.555 18:01:29 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:31.556 18:01:29 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:31.556 18:01:29 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:06:31.556 18:01:29 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:31.556 18:01:29 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:31.556 18:01:29 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:31.556 18:01:29 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:31.556 18:01:29 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:31.556 18:01:29 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:31.556 18:01:29 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:31.556 18:01:29 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:31.556 18:01:29 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:31.556 18:01:29 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:31.556 18:01:29 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:31.556 18:01:29 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:31.556 18:01:29 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:31.556 18:01:29 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:31.556 18:01:29 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:31.556 18:01:29 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:31.556 18:01:29 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:31.556 18:01:29 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:31.556 18:01:29 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:31.556 18:01:29 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:31.556 18:01:29 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:31.556 18:01:29 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:31.556 18:01:29 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:31.556 18:01:29 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:31.556 18:01:29 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:31.556 18:01:29 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:31.556 18:01:29 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:31.556 18:01:29 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:31.556 18:01:29 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:31.556 18:01:29 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:31.556 18:01:29 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:31.556 18:01:29 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:31.556 18:01:29 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:31.556 18:01:29 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:31.556 18:01:29 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:31.556 18:01:29 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:06:31.556 18:01:29 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:31.556 18:01:29 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:06:31.556 18:01:29 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:31.556 18:01:29 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:31.556 18:01:29 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:31.556 18:01:29 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:31.556 18:01:29 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:31.556 18:01:29 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:31.556 18:01:29 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:31.556 18:01:29 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:06:31.556 18:01:29 -- common/build_config.sh@69 -- # 
CONFIG_FIO_PLUGIN=y 00:06:31.556 18:01:29 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:31.556 18:01:29 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:31.556 18:01:29 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:31.556 18:01:29 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:31.556 18:01:29 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:31.556 18:01:29 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:31.556 18:01:29 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:31.556 18:01:29 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:31.556 18:01:29 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:31.556 18:01:29 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:31.556 18:01:29 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:31.556 18:01:29 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:31.556 18:01:29 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:31.556 18:01:29 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:31.556 18:01:29 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:31.556 18:01:29 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:31.556 18:01:29 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:31.556 18:01:29 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:31.556 18:01:29 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:31.556 18:01:29 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:31.556 18:01:29 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:31.556 18:01:29 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:31.556 18:01:29 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:31.556 18:01:29 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:31.556 18:01:29 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:31.556 18:01:29 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:31.556 #define SPDK_CONFIG_H 00:06:31.556 #define SPDK_CONFIG_APPS 1 00:06:31.556 #define SPDK_CONFIG_ARCH native 00:06:31.556 #undef SPDK_CONFIG_ASAN 00:06:31.556 #define SPDK_CONFIG_AVAHI 1 00:06:31.556 #undef SPDK_CONFIG_CET 00:06:31.556 #define SPDK_CONFIG_COVERAGE 1 00:06:31.556 #define SPDK_CONFIG_CROSS_PREFIX 00:06:31.556 #undef SPDK_CONFIG_CRYPTO 00:06:31.556 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:31.556 #undef SPDK_CONFIG_CUSTOMOCF 00:06:31.556 #undef SPDK_CONFIG_DAOS 00:06:31.556 #define SPDK_CONFIG_DAOS_DIR 00:06:31.556 #define SPDK_CONFIG_DEBUG 1 00:06:31.556 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:31.556 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:31.556 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:31.556 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:31.556 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:31.556 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:31.556 #define SPDK_CONFIG_EXAMPLES 1 00:06:31.556 #undef SPDK_CONFIG_FC 00:06:31.556 #define SPDK_CONFIG_FC_PATH 00:06:31.556 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:31.556 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:31.556 #undef 
SPDK_CONFIG_FUSE 00:06:31.556 #undef SPDK_CONFIG_FUZZER 00:06:31.556 #define SPDK_CONFIG_FUZZER_LIB 00:06:31.556 #define SPDK_CONFIG_GOLANG 1 00:06:31.556 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:31.556 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:31.556 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:31.556 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:31.556 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:31.556 #define SPDK_CONFIG_IDXD 1 00:06:31.556 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:31.556 #undef SPDK_CONFIG_IPSEC_MB 00:06:31.556 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:31.556 #define SPDK_CONFIG_ISAL 1 00:06:31.556 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:31.556 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:31.556 #define SPDK_CONFIG_LIBDIR 00:06:31.556 #undef SPDK_CONFIG_LTO 00:06:31.556 #define SPDK_CONFIG_MAX_LCORES 00:06:31.556 #define SPDK_CONFIG_NVME_CUSE 1 00:06:31.556 #undef SPDK_CONFIG_OCF 00:06:31.556 #define SPDK_CONFIG_OCF_PATH 00:06:31.556 #define SPDK_CONFIG_OPENSSL_PATH 00:06:31.556 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:31.556 #undef SPDK_CONFIG_PGO_USE 00:06:31.556 #define SPDK_CONFIG_PREFIX /usr/local 00:06:31.556 #undef SPDK_CONFIG_RAID5F 00:06:31.556 #undef SPDK_CONFIG_RBD 00:06:31.556 #define SPDK_CONFIG_RDMA 1 00:06:31.556 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:31.556 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:31.556 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:31.556 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:31.556 #define SPDK_CONFIG_SHARED 1 00:06:31.556 #undef SPDK_CONFIG_SMA 00:06:31.556 #define SPDK_CONFIG_TESTS 1 00:06:31.556 #undef SPDK_CONFIG_TSAN 00:06:31.556 #define SPDK_CONFIG_UBLK 1 00:06:31.556 #define SPDK_CONFIG_UBSAN 1 00:06:31.556 #undef SPDK_CONFIG_UNIT_TESTS 00:06:31.556 #undef SPDK_CONFIG_URING 00:06:31.556 #define SPDK_CONFIG_URING_PATH 00:06:31.556 #undef SPDK_CONFIG_URING_ZNS 00:06:31.556 #define SPDK_CONFIG_USDT 1 00:06:31.556 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:31.556 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:31.556 #define SPDK_CONFIG_VFIO_USER 1 00:06:31.556 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:31.556 #define SPDK_CONFIG_VHOST 1 00:06:31.556 #define SPDK_CONFIG_VIRTIO 1 00:06:31.556 #undef SPDK_CONFIG_VTUNE 00:06:31.556 #define SPDK_CONFIG_VTUNE_DIR 00:06:31.556 #define SPDK_CONFIG_WERROR 1 00:06:31.556 #define SPDK_CONFIG_WPDK_DIR 00:06:31.556 #undef SPDK_CONFIG_XNVME 00:06:31.556 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:31.556 18:01:29 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:31.556 18:01:29 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.556 18:01:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.556 18:01:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.556 18:01:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.556 18:01:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.556 18:01:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.557 18:01:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.557 18:01:29 -- paths/export.sh@5 -- # export PATH 00:06:31.557 18:01:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.557 18:01:29 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:31.557 18:01:29 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:31.557 18:01:29 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:31.557 18:01:29 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:31.557 18:01:29 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:31.557 18:01:29 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:31.557 18:01:29 -- pm/common@16 -- # TEST_TAG=N/A 00:06:31.557 18:01:29 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:31.557 18:01:29 -- common/autotest_common.sh@52 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:31.557 18:01:29 -- common/autotest_common.sh@56 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:31.557 18:01:29 -- common/autotest_common.sh@58 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:31.557 18:01:29 -- common/autotest_common.sh@60 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:31.557 18:01:29 -- common/autotest_common.sh@62 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:31.557 18:01:29 -- common/autotest_common.sh@64 -- # : 00:06:31.557 18:01:29 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:31.557 18:01:29 -- common/autotest_common.sh@66 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@67 -- # export 
SPDK_TEST_RELEASE_BUILD 00:06:31.557 18:01:29 -- common/autotest_common.sh@68 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:31.557 18:01:29 -- common/autotest_common.sh@70 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:31.557 18:01:29 -- common/autotest_common.sh@72 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:31.557 18:01:29 -- common/autotest_common.sh@74 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:31.557 18:01:29 -- common/autotest_common.sh@76 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:31.557 18:01:29 -- common/autotest_common.sh@78 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:31.557 18:01:29 -- common/autotest_common.sh@80 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:31.557 18:01:29 -- common/autotest_common.sh@82 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:31.557 18:01:29 -- common/autotest_common.sh@84 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:31.557 18:01:29 -- common/autotest_common.sh@86 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:31.557 18:01:29 -- common/autotest_common.sh@88 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:31.557 18:01:29 -- common/autotest_common.sh@90 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:31.557 18:01:29 -- common/autotest_common.sh@92 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:31.557 18:01:29 -- common/autotest_common.sh@94 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:31.557 18:01:29 -- common/autotest_common.sh@96 -- # : tcp 00:06:31.557 18:01:29 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:31.557 18:01:29 -- common/autotest_common.sh@98 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:31.557 18:01:29 -- common/autotest_common.sh@100 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:31.557 18:01:29 -- common/autotest_common.sh@102 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:31.557 18:01:29 -- common/autotest_common.sh@104 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:31.557 18:01:29 -- common/autotest_common.sh@106 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:31.557 18:01:29 -- common/autotest_common.sh@108 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:31.557 18:01:29 -- common/autotest_common.sh@110 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:31.557 18:01:29 -- common/autotest_common.sh@112 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:31.557 18:01:29 -- common/autotest_common.sh@114 -- # : 0 00:06:31.557 18:01:29 -- 
common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:31.557 18:01:29 -- common/autotest_common.sh@116 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:31.557 18:01:29 -- common/autotest_common.sh@118 -- # : 00:06:31.557 18:01:29 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:31.557 18:01:29 -- common/autotest_common.sh@120 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:31.557 18:01:29 -- common/autotest_common.sh@122 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:31.557 18:01:29 -- common/autotest_common.sh@124 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:31.557 18:01:29 -- common/autotest_common.sh@126 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:31.557 18:01:29 -- common/autotest_common.sh@128 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:31.557 18:01:29 -- common/autotest_common.sh@130 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:31.557 18:01:29 -- common/autotest_common.sh@132 -- # : 00:06:31.557 18:01:29 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:31.557 18:01:29 -- common/autotest_common.sh@134 -- # : true 00:06:31.557 18:01:29 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:31.557 18:01:29 -- common/autotest_common.sh@136 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:31.557 18:01:29 -- common/autotest_common.sh@138 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:31.557 18:01:29 -- common/autotest_common.sh@140 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:31.557 18:01:29 -- common/autotest_common.sh@142 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:31.557 18:01:29 -- common/autotest_common.sh@144 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:31.557 18:01:29 -- common/autotest_common.sh@146 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:31.557 18:01:29 -- common/autotest_common.sh@148 -- # : 00:06:31.557 18:01:29 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:31.557 18:01:29 -- common/autotest_common.sh@150 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:31.557 18:01:29 -- common/autotest_common.sh@152 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:06:31.557 18:01:29 -- common/autotest_common.sh@154 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:31.557 18:01:29 -- common/autotest_common.sh@156 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:31.557 18:01:29 -- common/autotest_common.sh@158 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:31.557 18:01:29 -- common/autotest_common.sh@160 -- # : 0 00:06:31.557 18:01:29 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:31.557 18:01:29 -- common/autotest_common.sh@163 -- # : 00:06:31.557 18:01:29 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:31.557 18:01:29 -- common/autotest_common.sh@165 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:31.557 18:01:29 -- common/autotest_common.sh@167 -- # : 1 00:06:31.557 18:01:29 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:31.557 18:01:29 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:31.557 18:01:29 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:31.557 18:01:29 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:31.557 18:01:29 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:31.557 18:01:29 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.557 18:01:29 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.557 18:01:29 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.558 18:01:29 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.558 18:01:29 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.558 18:01:29 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.558 18:01:29 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.558 18:01:29 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.558 18:01:29 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:31.558 18:01:29 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:31.558 18:01:29 -- 
common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.558 18:01:29 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.558 18:01:29 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.558 18:01:29 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.558 18:01:29 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:31.558 18:01:29 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:31.558 18:01:29 -- common/autotest_common.sh@196 -- # cat 00:06:31.558 18:01:29 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:31.558 18:01:29 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.558 18:01:29 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.558 18:01:29 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.558 18:01:29 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.558 18:01:29 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:31.558 18:01:29 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:31.558 18:01:29 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:31.558 18:01:29 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:31.558 18:01:29 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:31.558 18:01:29 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:31.558 18:01:29 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:31.558 18:01:29 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:31.558 18:01:29 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:31.558 18:01:29 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:31.558 18:01:29 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:31.558 18:01:29 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:31.558 18:01:29 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.558 18:01:29 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.558 18:01:29 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:31.558 18:01:29 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:31.558 18:01:29 -- common/autotest_common.sh@249 -- # valgrind= 00:06:31.558 18:01:29 -- common/autotest_common.sh@255 -- # uname -s 00:06:31.558 18:01:29 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:31.558 18:01:29 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:31.558 18:01:29 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 
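The long run of autotest_common.sh entries above is the harness exporting one SPDK_TEST_* flag after another (a bare value line at one script line, an export at the next) and then pinning the sanitizer environment just above. Condensed into a short sketch: the flag names and values are taken from the trace, but the exact default-assignment wording inside autotest_common.sh is an assumption here.

    # Default each test flag unless the CI config (autorun-spdk.conf) already set it,
    # then export it for the child test scripts. (Idiom assumed; values from the trace.)
    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_USDT:=1}";             export SPDK_TEST_USDT

    # Sanitizer knobs exported verbatim in the entries just above:
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file   # file seeded with "leak:libfuse3.so"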
00:06:31.558 18:01:29 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:31.558 18:01:29 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:31.558 18:01:29 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:31.558 18:01:29 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:31.558 18:01:29 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:06:31.558 18:01:29 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:31.558 18:01:29 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:31.558 18:01:29 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:31.558 18:01:29 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:31.558 18:01:29 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:31.558 18:01:29 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:31.558 18:01:29 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:31.558 18:01:29 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:31.558 18:01:29 -- common/autotest_common.sh@309 -- # [[ -z 60322 ]] 00:06:31.558 18:01:29 -- common/autotest_common.sh@309 -- # kill -0 60322 00:06:31.558 18:01:29 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:31.558 18:01:29 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:31.558 18:01:29 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:31.558 18:01:29 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:31.558 18:01:29 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:31.558 18:01:29 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:31.558 18:01:29 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:31.558 18:01:29 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:31.558 18:01:29 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.3a0P9O 00:06:31.558 18:01:29 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:31.558 18:01:29 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:31.558 18:01:29 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:31.558 18:01:29 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.3a0P9O/tests/target /tmp/spdk.3a0P9O 00:06:31.558 18:01:29 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:31.558 18:01:29 -- common/autotest_common.sh@318 -- # df -T 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266634240 00:06:31.558 18:01:29 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=13801955328 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=5222768640 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=13801955328 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=5222768640 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267756544 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=135168 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # 
fss["$mount"]=tmpfs 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:06:31.558 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:06:31.558 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:31.558 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.558 18:01:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:06:31.559 18:01:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:06:31.559 18:01:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=95859036160 00:06:31.559 18:01:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:06:31.559 18:01:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=3843743744 00:06:31.559 18:01:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:31.559 18:01:29 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:31.559 * Looking for test storage... 00:06:31.559 18:01:29 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:31.559 18:01:29 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:31.559 18:01:29 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.559 18:01:29 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:31.559 18:01:29 -- common/autotest_common.sh@363 -- # mount=/home 00:06:31.559 18:01:29 -- common/autotest_common.sh@365 -- # target_space=13801955328 00:06:31.559 18:01:29 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:31.559 18:01:29 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:31.559 18:01:29 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:06:31.559 18:01:29 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:06:31.559 18:01:29 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:06:31.559 18:01:29 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.559 18:01:29 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.559 18:01:29 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.559 18:01:29 -- common/autotest_common.sh@380 -- # return 0 00:06:31.559 18:01:29 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:31.559 18:01:29 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:31.559 18:01:29 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:31.559 18:01:29 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:31.559 18:01:29 -- common/autotest_common.sh@1672 -- # true 00:06:31.559 18:01:29 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:31.559 18:01:29 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:31.559 18:01:29 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:31.559 18:01:29 -- common/autotest_common.sh@27 -- # exec 00:06:31.559 18:01:29 -- common/autotest_common.sh@29 -- # exec 00:06:31.559 18:01:29 -- common/autotest_common.sh@31 -- # 
xtrace_restore 00:06:31.559 18:01:29 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:31.559 18:01:29 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:31.559 18:01:29 -- common/autotest_common.sh@18 -- # set -x 00:06:31.559 18:01:29 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:31.559 18:01:29 -- nvmf/common.sh@7 -- # uname -s 00:06:31.559 18:01:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.559 18:01:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.559 18:01:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.559 18:01:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.559 18:01:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.559 18:01:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.559 18:01:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.559 18:01:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.559 18:01:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.559 18:01:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.559 18:01:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:31.559 18:01:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:31.559 18:01:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.559 18:01:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.559 18:01:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:31.559 18:01:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.559 18:01:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.559 18:01:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.559 18:01:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.559 18:01:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.559 18:01:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.559 18:01:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.559 18:01:29 -- paths/export.sh@5 -- # export PATH 00:06:31.559 18:01:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.559 18:01:29 -- nvmf/common.sh@46 -- # : 0 00:06:31.559 18:01:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:31.559 18:01:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:31.559 18:01:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:31.559 18:01:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.559 18:01:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.559 18:01:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:31.559 18:01:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:31.559 18:01:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:31.559 18:01:29 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:31.559 18:01:29 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:31.559 18:01:29 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:31.559 18:01:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:31.559 18:01:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.559 18:01:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:31.559 18:01:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:31.559 18:01:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:31.559 18:01:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.559 18:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.559 18:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.818 18:01:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:31.818 18:01:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:31.818 18:01:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:31.818 18:01:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:31.818 18:01:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:31.818 18:01:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:31.818 18:01:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.818 18:01:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.818 18:01:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:31.818 18:01:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:31.818 18:01:29 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:31.818 18:01:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:31.818 18:01:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:31.818 18:01:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.818 18:01:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:31.818 18:01:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:31.818 18:01:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:31.818 18:01:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:31.818 18:01:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:31.818 18:01:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:31.818 Cannot find device "nvmf_tgt_br" 00:06:31.818 18:01:29 -- nvmf/common.sh@154 -- # true 00:06:31.818 18:01:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:31.818 Cannot find device "nvmf_tgt_br2" 00:06:31.818 18:01:29 -- nvmf/common.sh@155 -- # true 00:06:31.818 18:01:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:31.818 18:01:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:31.818 Cannot find device "nvmf_tgt_br" 00:06:31.818 18:01:29 -- nvmf/common.sh@157 -- # true 00:06:31.818 18:01:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:31.818 Cannot find device "nvmf_tgt_br2" 00:06:31.818 18:01:29 -- nvmf/common.sh@158 -- # true 00:06:31.818 18:01:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:31.818 18:01:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:31.818 18:01:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:31.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:31.818 18:01:29 -- nvmf/common.sh@161 -- # true 00:06:31.818 18:01:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:31.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:31.818 18:01:29 -- nvmf/common.sh@162 -- # true 00:06:31.818 18:01:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:31.818 18:01:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:31.818 18:01:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:31.818 18:01:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:31.818 18:01:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:31.818 18:01:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:31.818 18:01:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:31.818 18:01:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:31.818 18:01:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:31.818 18:01:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:31.818 18:01:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:31.818 18:01:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:31.818 18:01:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:31.818 18:01:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:31.818 18:01:29 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:32.077 18:01:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:32.077 18:01:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:32.077 18:01:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:32.077 18:01:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:32.077 18:01:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:32.077 18:01:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:32.077 18:01:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:32.077 18:01:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:32.077 18:01:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:32.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:06:32.077 00:06:32.077 --- 10.0.0.2 ping statistics --- 00:06:32.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.077 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:32.077 18:01:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:32.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:32.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:06:32.077 00:06:32.077 --- 10.0.0.3 ping statistics --- 00:06:32.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.077 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:06:32.077 18:01:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:32.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:06:32.077 00:06:32.077 --- 10.0.0.1 ping statistics --- 00:06:32.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.077 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:06:32.077 18:01:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.077 18:01:29 -- nvmf/common.sh@421 -- # return 0 00:06:32.077 18:01:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:32.077 18:01:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.077 18:01:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:32.077 18:01:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:32.077 18:01:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.077 18:01:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:32.077 18:01:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:32.077 18:01:29 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:32.077 18:01:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:32.077 18:01:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.077 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 ************************************ 00:06:32.077 START TEST nvmf_filesystem_no_in_capsule 00:06:32.077 ************************************ 00:06:32.077 18:01:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:32.077 18:01:29 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:32.077 18:01:29 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:32.077 18:01:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:32.077 18:01:29 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:06:32.077 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.077 18:01:29 -- nvmf/common.sh@469 -- # nvmfpid=60478 00:06:32.077 18:01:29 -- nvmf/common.sh@470 -- # waitforlisten 60478 00:06:32.077 18:01:29 -- common/autotest_common.sh@819 -- # '[' -z 60478 ']' 00:06:32.077 18:01:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.077 18:01:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:32.077 18:01:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.077 18:01:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.077 18:01:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.077 18:01:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 [2024-04-25 18:01:29.936473] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:32.077 [2024-04-25 18:01:29.936563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.336 [2024-04-25 18:01:30.077835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.336 [2024-04-25 18:01:30.209563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.336 [2024-04-25 18:01:30.210956] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.336 [2024-04-25 18:01:30.211231] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.336 [2024-04-25 18:01:30.211465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
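The entries above are nvmf_veth_init from nvmf/common.sh rebuilding the test network, then nvmfappstart launching the target inside the new namespace. The initiator side stays in the root namespace (10.0.0.1) while both target interfaces are moved into a fresh nvmf_tgt_ns_spdk namespace (10.0.0.2/.3), and the peer ends are enslaved to a bridge so the two can reach each other. A condensed sketch using only commands visible in the trace; retry and error paths are elided.

    # target-side namespace plus the veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end #1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target end #2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 = initiator, 10.0.0.2/.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the *_br peer ends together
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # open the NVMe/TCP port, allow bridged forwarding, sanity-check with ping
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3

    # the target then runs inside the namespace; the harness backgrounds it and
    # waits for /var/tmp/spdk.sock before issuing any RPCs
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF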
00:06:32.336 [2024-04-25 18:01:30.211973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.336 [2024-04-25 18:01:30.212177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.336 [2024-04-25 18:01:30.212311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.336 [2024-04-25 18:01:30.212315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.272 18:01:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.272 18:01:30 -- common/autotest_common.sh@852 -- # return 0 00:06:33.272 18:01:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:33.272 18:01:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:33.272 18:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:33.272 18:01:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.272 18:01:30 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:33.272 18:01:31 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:33.272 18:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.272 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.272 [2024-04-25 18:01:31.014395] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.272 18:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.272 18:01:31 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:33.272 18:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.272 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.530 Malloc1 00:06:33.530 18:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.530 18:01:31 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:33.530 18:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.530 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.530 18:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.530 18:01:31 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:33.530 18:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.530 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.530 18:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.530 18:01:31 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.530 18:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.530 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.530 [2024-04-25 18:01:31.314966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.530 18:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.530 18:01:31 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:33.530 18:01:31 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:33.530 18:01:31 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:33.530 18:01:31 -- common/autotest_common.sh@1359 -- # local bs 00:06:33.530 18:01:31 -- common/autotest_common.sh@1360 -- # local nb 00:06:33.530 18:01:31 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:33.530 18:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.530 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.530 
18:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.530 18:01:31 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:33.530 { 00:06:33.530 "aliases": [ 00:06:33.530 "4f549c29-8b21-4754-93da-fa7860ee4039" 00:06:33.530 ], 00:06:33.530 "assigned_rate_limits": { 00:06:33.530 "r_mbytes_per_sec": 0, 00:06:33.530 "rw_ios_per_sec": 0, 00:06:33.530 "rw_mbytes_per_sec": 0, 00:06:33.530 "w_mbytes_per_sec": 0 00:06:33.531 }, 00:06:33.531 "block_size": 512, 00:06:33.531 "claim_type": "exclusive_write", 00:06:33.531 "claimed": true, 00:06:33.531 "driver_specific": {}, 00:06:33.531 "memory_domains": [ 00:06:33.531 { 00:06:33.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.531 "dma_device_type": 2 00:06:33.531 } 00:06:33.531 ], 00:06:33.531 "name": "Malloc1", 00:06:33.531 "num_blocks": 1048576, 00:06:33.531 "product_name": "Malloc disk", 00:06:33.531 "supported_io_types": { 00:06:33.531 "abort": true, 00:06:33.531 "compare": false, 00:06:33.531 "compare_and_write": false, 00:06:33.531 "flush": true, 00:06:33.531 "nvme_admin": false, 00:06:33.531 "nvme_io": false, 00:06:33.531 "read": true, 00:06:33.531 "reset": true, 00:06:33.531 "unmap": true, 00:06:33.531 "write": true, 00:06:33.531 "write_zeroes": true 00:06:33.531 }, 00:06:33.531 "uuid": "4f549c29-8b21-4754-93da-fa7860ee4039", 00:06:33.531 "zoned": false 00:06:33.531 } 00:06:33.531 ]' 00:06:33.531 18:01:31 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:33.531 18:01:31 -- common/autotest_common.sh@1362 -- # bs=512 00:06:33.531 18:01:31 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:33.789 18:01:31 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:33.789 18:01:31 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:33.789 18:01:31 -- common/autotest_common.sh@1367 -- # echo 512 00:06:33.789 18:01:31 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:33.789 18:01:31 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:33.789 18:01:31 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:33.789 18:01:31 -- common/autotest_common.sh@1177 -- # local i=0 00:06:33.789 18:01:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:33.789 18:01:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:33.789 18:01:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:36.324 18:01:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:36.324 18:01:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:36.324 18:01:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:36.324 18:01:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:36.324 18:01:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:36.324 18:01:33 -- common/autotest_common.sh@1187 -- # return 0 00:06:36.324 18:01:33 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:36.324 18:01:33 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:36.324 18:01:33 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:36.324 18:01:33 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:36.324 18:01:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:36.324 18:01:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:36.324 18:01:33 -- 
setup/common.sh@80 -- # echo 536870912 00:06:36.324 18:01:33 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:36.324 18:01:33 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:36.324 18:01:33 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:36.324 18:01:33 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:36.324 18:01:33 -- target/filesystem.sh@69 -- # partprobe 00:06:36.324 18:01:33 -- target/filesystem.sh@70 -- # sleep 1 00:06:37.261 18:01:34 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:37.261 18:01:34 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:37.261 18:01:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:37.261 18:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.261 18:01:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 ************************************ 00:06:37.261 START TEST filesystem_ext4 00:06:37.261 ************************************ 00:06:37.261 18:01:34 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:37.261 18:01:34 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:37.261 18:01:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:37.261 18:01:34 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:37.261 18:01:34 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:37.261 18:01:34 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:37.261 18:01:34 -- common/autotest_common.sh@904 -- # local i=0 00:06:37.261 18:01:34 -- common/autotest_common.sh@905 -- # local force 00:06:37.261 18:01:34 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:37.261 18:01:34 -- common/autotest_common.sh@908 -- # force=-F 00:06:37.261 18:01:34 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:37.261 mke2fs 1.46.5 (30-Dec-2021) 00:06:37.261 Discarding device blocks: 0/522240 done 00:06:37.261 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:37.261 Filesystem UUID: b2399b15-9bf9-48e7-8967-b66ffb1afdf8 00:06:37.261 Superblock backups stored on blocks: 00:06:37.261 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:37.261 00:06:37.261 Allocating group tables: 0/64 done 00:06:37.261 Writing inode tables: 0/64 done 00:06:37.261 Creating journal (8192 blocks): done 00:06:37.261 Writing superblocks and filesystem accounting information: 0/64 done 00:06:37.261 00:06:37.261 18:01:34 -- common/autotest_common.sh@921 -- # return 0 00:06:37.261 18:01:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:37.261 18:01:35 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:37.261 18:01:35 -- target/filesystem.sh@25 -- # sync 00:06:37.520 18:01:35 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:37.520 18:01:35 -- target/filesystem.sh@27 -- # sync 00:06:37.520 18:01:35 -- target/filesystem.sh@29 -- # i=0 00:06:37.520 18:01:35 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:37.520 18:01:35 -- target/filesystem.sh@37 -- # kill -0 60478 00:06:37.520 18:01:35 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:37.520 18:01:35 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:37.520 18:01:35 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:37.520 18:01:35 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:37.520 00:06:37.520 real 0m0.384s 00:06:37.520 user 0m0.020s 00:06:37.520 sys 0m0.064s 00:06:37.520 
************************************ 00:06:37.520 END TEST filesystem_ext4 00:06:37.520 ************************************ 00:06:37.520 18:01:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.520 18:01:35 -- common/autotest_common.sh@10 -- # set +x 00:06:37.520 18:01:35 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:37.520 18:01:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:37.520 18:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.520 18:01:35 -- common/autotest_common.sh@10 -- # set +x 00:06:37.520 ************************************ 00:06:37.520 START TEST filesystem_btrfs 00:06:37.520 ************************************ 00:06:37.520 18:01:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:37.520 18:01:35 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:37.520 18:01:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:37.520 18:01:35 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:37.520 18:01:35 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:37.520 18:01:35 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:37.520 18:01:35 -- common/autotest_common.sh@904 -- # local i=0 00:06:37.520 18:01:35 -- common/autotest_common.sh@905 -- # local force 00:06:37.520 18:01:35 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:37.520 18:01:35 -- common/autotest_common.sh@910 -- # force=-f 00:06:37.520 18:01:35 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:37.520 btrfs-progs v6.6.2 00:06:37.520 See https://btrfs.readthedocs.io for more information. 00:06:37.520 00:06:37.520 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
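Each filesystem_* sub-test (the ext4 run that just finished above and the btrfs run whose mkfs output continues below, then xfs) exercises the same small loop from target/filesystem.sh: make a filesystem on the exported namespace's partition, mount it, do a touch/sync/rm round-trip over NVMe/TCP, confirm the target is still alive, and check the block device is still visible. Condensed from the trace; the device name nvme0n1p1 and pid 60478 are what this particular run enumerated.

    mkfs.ext4 -F /dev/nvme0n1p1             # or mkfs.btrfs -f / mkfs.xfs -f for the other cases
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync           # write something and flush it to the target
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # target process (pid 60478 here) must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1   # namespace still visible to the initiator
    lsblk -l -o NAME | grep -q -w nvme0n1p1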
00:06:37.520 NOTE: several default settings have changed in version 5.15, please make sure 00:06:37.520 this does not affect your deployments: 00:06:37.520 - DUP for metadata (-m dup) 00:06:37.520 - enabled no-holes (-O no-holes) 00:06:37.520 - enabled free-space-tree (-R free-space-tree) 00:06:37.520 00:06:37.520 Label: (null) 00:06:37.520 UUID: ef55f449-df79-4de6-98ca-3d138767ad35 00:06:37.520 Node size: 16384 00:06:37.520 Sector size: 4096 00:06:37.520 Filesystem size: 510.00MiB 00:06:37.520 Block group profiles: 00:06:37.520 Data: single 8.00MiB 00:06:37.520 Metadata: DUP 32.00MiB 00:06:37.520 System: DUP 8.00MiB 00:06:37.520 SSD detected: yes 00:06:37.520 Zoned device: no 00:06:37.520 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:37.520 Runtime features: free-space-tree 00:06:37.520 Checksum: crc32c 00:06:37.520 Number of devices: 1 00:06:37.520 Devices: 00:06:37.520 ID SIZE PATH 00:06:37.520 1 510.00MiB /dev/nvme0n1p1 00:06:37.520 00:06:37.520 18:01:35 -- common/autotest_common.sh@921 -- # return 0 00:06:37.520 18:01:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:37.779 18:01:35 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:37.779 18:01:35 -- target/filesystem.sh@25 -- # sync 00:06:37.779 18:01:35 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:37.779 18:01:35 -- target/filesystem.sh@27 -- # sync 00:06:37.779 18:01:35 -- target/filesystem.sh@29 -- # i=0 00:06:37.779 18:01:35 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:37.779 18:01:35 -- target/filesystem.sh@37 -- # kill -0 60478 00:06:37.779 18:01:35 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:37.779 18:01:35 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:37.779 18:01:35 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:37.779 18:01:35 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:37.779 00:06:37.779 real 0m0.261s 00:06:37.779 user 0m0.025s 00:06:37.779 sys 0m0.070s 00:06:37.779 18:01:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.779 18:01:35 -- common/autotest_common.sh@10 -- # set +x 00:06:37.779 ************************************ 00:06:37.779 END TEST filesystem_btrfs 00:06:37.779 ************************************ 00:06:37.779 18:01:35 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:37.779 18:01:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:37.779 18:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.779 18:01:35 -- common/autotest_common.sh@10 -- # set +x 00:06:37.779 ************************************ 00:06:37.779 START TEST filesystem_xfs 00:06:37.779 ************************************ 00:06:37.779 18:01:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:37.779 18:01:35 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:37.779 18:01:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:37.779 18:01:35 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:37.779 18:01:35 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:37.779 18:01:35 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:37.779 18:01:35 -- common/autotest_common.sh@904 -- # local i=0 00:06:37.779 18:01:35 -- common/autotest_common.sh@905 -- # local force 00:06:37.779 18:01:35 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:37.779 18:01:35 -- common/autotest_common.sh@910 -- # force=-f 00:06:37.779 18:01:35 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:37.779 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:37.779 = sectsz=512 attr=2, projid32bit=1 00:06:37.779 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:37.779 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:37.779 data = bsize=4096 blocks=130560, imaxpct=25 00:06:37.779 = sunit=0 swidth=0 blks 00:06:37.779 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:37.779 log =internal log bsize=4096 blocks=16384, version=2 00:06:37.779 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:37.779 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:38.716 Discarding blocks...Done. 00:06:38.716 18:01:36 -- common/autotest_common.sh@921 -- # return 0 00:06:38.716 18:01:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.299 18:01:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.299 18:01:38 -- target/filesystem.sh@25 -- # sync 00:06:41.299 18:01:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.299 18:01:38 -- target/filesystem.sh@27 -- # sync 00:06:41.299 18:01:38 -- target/filesystem.sh@29 -- # i=0 00:06:41.299 18:01:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.299 18:01:38 -- target/filesystem.sh@37 -- # kill -0 60478 00:06:41.299 18:01:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.299 18:01:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.299 18:01:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.299 18:01:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.299 ************************************ 00:06:41.299 END TEST filesystem_xfs 00:06:41.299 ************************************ 00:06:41.299 00:06:41.299 real 0m3.122s 00:06:41.299 user 0m0.017s 00:06:41.299 sys 0m0.061s 00:06:41.299 18:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.299 18:01:38 -- common/autotest_common.sh@10 -- # set +x 00:06:41.299 18:01:38 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:41.299 18:01:38 -- target/filesystem.sh@93 -- # sync 00:06:41.299 18:01:38 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:41.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.299 18:01:38 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:41.299 18:01:38 -- common/autotest_common.sh@1198 -- # local i=0 00:06:41.299 18:01:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:41.299 18:01:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:41.299 18:01:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:41.299 18:01:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:41.299 18:01:38 -- common/autotest_common.sh@1210 -- # return 0 00:06:41.299 18:01:38 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:41.299 18:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.299 18:01:38 -- common/autotest_common.sh@10 -- # set +x 00:06:41.299 18:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.299 18:01:38 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:41.299 18:01:38 -- target/filesystem.sh@101 -- # killprocess 60478 00:06:41.299 18:01:38 -- common/autotest_common.sh@926 -- # '[' -z 60478 ']' 00:06:41.299 18:01:38 -- common/autotest_common.sh@930 -- # kill -0 60478 00:06:41.299 18:01:38 -- 
common/autotest_common.sh@931 -- # uname 00:06:41.299 18:01:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.299 18:01:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60478 00:06:41.299 18:01:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.299 18:01:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.299 killing process with pid 60478 00:06:41.299 18:01:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60478' 00:06:41.299 18:01:38 -- common/autotest_common.sh@945 -- # kill 60478 00:06:41.299 18:01:38 -- common/autotest_common.sh@950 -- # wait 60478 00:06:41.866 18:01:39 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:41.866 00:06:41.866 real 0m9.658s 00:06:41.866 user 0m36.371s 00:06:41.866 sys 0m1.570s 00:06:41.866 18:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.866 ************************************ 00:06:41.866 END TEST nvmf_filesystem_no_in_capsule 00:06:41.866 18:01:39 -- common/autotest_common.sh@10 -- # set +x 00:06:41.866 ************************************ 00:06:41.866 18:01:39 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:41.866 18:01:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:41.866 18:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.866 18:01:39 -- common/autotest_common.sh@10 -- # set +x 00:06:41.866 ************************************ 00:06:41.866 START TEST nvmf_filesystem_in_capsule 00:06:41.866 ************************************ 00:06:41.866 18:01:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:41.866 18:01:39 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:41.866 18:01:39 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:41.866 18:01:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:41.866 18:01:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:41.866 18:01:39 -- common/autotest_common.sh@10 -- # set +x 00:06:41.866 18:01:39 -- nvmf/common.sh@469 -- # nvmfpid=60798 00:06:41.866 18:01:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:41.866 18:01:39 -- nvmf/common.sh@470 -- # waitforlisten 60798 00:06:41.866 18:01:39 -- common/autotest_common.sh@819 -- # '[' -z 60798 ']' 00:06:41.866 18:01:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.866 18:01:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.867 18:01:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.867 18:01:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.867 18:01:39 -- common/autotest_common.sh@10 -- # set +x 00:06:41.867 [2024-04-25 18:01:39.641236] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
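The in-capsule pass that starts here is set up the same way as the earlier no-in-capsule run except that the TCP transport is created with -c 4096, so commands may carry up to 4096 bytes of in-capsule data. Condensed into plain shell from the commands this log traces (rpc_cmd in the trace wraps scripts/rpc.py; the backgrounding and simplified flags below are a sketch, not the verbatim filesystem.sh source), the target-side bring-up is roughly:

  # Sketch of the traced target setup for the 4096-byte in-capsule case.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096      # 4096-byte in-capsule data size
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                # 512 MiB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator side; host NQN/ID flags from the trace omitted for brevity
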
00:06:41.867 [2024-04-25 18:01:39.641343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.867 [2024-04-25 18:01:39.778084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.125 [2024-04-25 18:01:39.900805] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.125 [2024-04-25 18:01:39.900985] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.125 [2024-04-25 18:01:39.900998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.125 [2024-04-25 18:01:39.901007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.125 [2024-04-25 18:01:39.901167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.125 [2024-04-25 18:01:39.901234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.125 [2024-04-25 18:01:39.901334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.125 [2024-04-25 18:01:39.901341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.693 18:01:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.693 18:01:40 -- common/autotest_common.sh@852 -- # return 0 00:06:42.693 18:01:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:42.693 18:01:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:42.693 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.693 18:01:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.693 18:01:40 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:42.952 18:01:40 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:42.952 18:01:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.952 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.952 [2024-04-25 18:01:40.630867] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.952 18:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.952 18:01:40 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:42.952 18:01:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.952 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.952 Malloc1 00:06:42.952 18:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.952 18:01:40 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:42.952 18:01:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.952 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.952 18:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.952 18:01:40 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:42.952 18:01:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.952 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.952 18:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.952 18:01:40 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.952 18:01:40 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.952 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.952 [2024-04-25 18:01:40.814143] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.952 18:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.952 18:01:40 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:42.952 18:01:40 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:42.952 18:01:40 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:42.952 18:01:40 -- common/autotest_common.sh@1359 -- # local bs 00:06:42.952 18:01:40 -- common/autotest_common.sh@1360 -- # local nb 00:06:42.952 18:01:40 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:42.952 18:01:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.952 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.952 18:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.952 18:01:40 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:42.952 { 00:06:42.952 "aliases": [ 00:06:42.952 "e4b7f6a5-b472-4670-889f-f6eb00b46d94" 00:06:42.952 ], 00:06:42.952 "assigned_rate_limits": { 00:06:42.952 "r_mbytes_per_sec": 0, 00:06:42.952 "rw_ios_per_sec": 0, 00:06:42.952 "rw_mbytes_per_sec": 0, 00:06:42.952 "w_mbytes_per_sec": 0 00:06:42.952 }, 00:06:42.952 "block_size": 512, 00:06:42.952 "claim_type": "exclusive_write", 00:06:42.952 "claimed": true, 00:06:42.952 "driver_specific": {}, 00:06:42.952 "memory_domains": [ 00:06:42.952 { 00:06:42.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.952 "dma_device_type": 2 00:06:42.952 } 00:06:42.952 ], 00:06:42.952 "name": "Malloc1", 00:06:42.952 "num_blocks": 1048576, 00:06:42.952 "product_name": "Malloc disk", 00:06:42.952 "supported_io_types": { 00:06:42.952 "abort": true, 00:06:42.952 "compare": false, 00:06:42.952 "compare_and_write": false, 00:06:42.952 "flush": true, 00:06:42.952 "nvme_admin": false, 00:06:42.952 "nvme_io": false, 00:06:42.952 "read": true, 00:06:42.952 "reset": true, 00:06:42.952 "unmap": true, 00:06:42.952 "write": true, 00:06:42.952 "write_zeroes": true 00:06:42.952 }, 00:06:42.952 "uuid": "e4b7f6a5-b472-4670-889f-f6eb00b46d94", 00:06:42.952 "zoned": false 00:06:42.952 } 00:06:42.952 ]' 00:06:42.952 18:01:40 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:43.211 18:01:40 -- common/autotest_common.sh@1362 -- # bs=512 00:06:43.211 18:01:40 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:43.211 18:01:40 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:43.211 18:01:40 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:43.211 18:01:40 -- common/autotest_common.sh@1367 -- # echo 512 00:06:43.211 18:01:40 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:43.211 18:01:40 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:43.212 18:01:41 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:43.212 18:01:41 -- common/autotest_common.sh@1177 -- # local i=0 00:06:43.212 18:01:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:43.212 18:01:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:43.212 18:01:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:45.745 18:01:43 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:45.745 18:01:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:45.745 18:01:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:45.745 18:01:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:45.745 18:01:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:45.745 18:01:43 -- common/autotest_common.sh@1187 -- # return 0 00:06:45.745 18:01:43 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:45.745 18:01:43 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:45.745 18:01:43 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:45.745 18:01:43 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:45.745 18:01:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:45.745 18:01:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:45.745 18:01:43 -- setup/common.sh@80 -- # echo 536870912 00:06:45.745 18:01:43 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:45.745 18:01:43 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:45.745 18:01:43 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:45.745 18:01:43 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:45.745 18:01:43 -- target/filesystem.sh@69 -- # partprobe 00:06:45.745 18:01:43 -- target/filesystem.sh@70 -- # sleep 1 00:06:46.312 18:01:44 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:46.312 18:01:44 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:46.312 18:01:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:46.312 18:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.312 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.580 ************************************ 00:06:46.580 START TEST filesystem_in_capsule_ext4 00:06:46.580 ************************************ 00:06:46.580 18:01:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:46.580 18:01:44 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:46.580 18:01:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:46.580 18:01:44 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:46.580 18:01:44 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:46.580 18:01:44 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:46.580 18:01:44 -- common/autotest_common.sh@904 -- # local i=0 00:06:46.580 18:01:44 -- common/autotest_common.sh@905 -- # local force 00:06:46.580 18:01:44 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:46.580 18:01:44 -- common/autotest_common.sh@908 -- # force=-F 00:06:46.580 18:01:44 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:46.580 mke2fs 1.46.5 (30-Dec-2021) 00:06:46.580 Discarding device blocks: 0/522240 done 00:06:46.580 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:46.580 Filesystem UUID: a2bdab3b-c4ce-4d05-9ccd-09c6054f8efc 00:06:46.580 Superblock backups stored on blocks: 00:06:46.580 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:46.580 00:06:46.580 Allocating group tables: 0/64 done 00:06:46.580 Writing inode tables: 0/64 done 00:06:46.580 Creating journal (8192 blocks): done 00:06:46.580 Writing superblocks and filesystem accounting information: 0/64 done 00:06:46.580 00:06:46.580 
18:01:44 -- common/autotest_common.sh@921 -- # return 0 00:06:46.580 18:01:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:46.580 18:01:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:46.845 18:01:44 -- target/filesystem.sh@25 -- # sync 00:06:46.845 18:01:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:46.845 18:01:44 -- target/filesystem.sh@27 -- # sync 00:06:46.845 18:01:44 -- target/filesystem.sh@29 -- # i=0 00:06:46.845 18:01:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:46.845 18:01:44 -- target/filesystem.sh@37 -- # kill -0 60798 00:06:46.845 18:01:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:46.845 18:01:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:46.845 18:01:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:46.845 18:01:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:46.845 00:06:46.845 real 0m0.360s 00:06:46.845 user 0m0.021s 00:06:46.845 sys 0m0.057s 00:06:46.845 18:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.845 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.845 ************************************ 00:06:46.845 END TEST filesystem_in_capsule_ext4 00:06:46.845 ************************************ 00:06:46.845 18:01:44 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:46.845 18:01:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:46.845 18:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.845 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.845 ************************************ 00:06:46.845 START TEST filesystem_in_capsule_btrfs 00:06:46.845 ************************************ 00:06:46.845 18:01:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:46.845 18:01:44 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:46.845 18:01:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:46.845 18:01:44 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:46.845 18:01:44 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:46.845 18:01:44 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:46.845 18:01:44 -- common/autotest_common.sh@904 -- # local i=0 00:06:46.845 18:01:44 -- common/autotest_common.sh@905 -- # local force 00:06:46.845 18:01:44 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:46.845 18:01:44 -- common/autotest_common.sh@910 -- # force=-f 00:06:46.845 18:01:44 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:47.105 btrfs-progs v6.6.2 00:06:47.105 See https://btrfs.readthedocs.io for more information. 00:06:47.105 00:06:47.105 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:47.105 NOTE: several default settings have changed in version 5.15, please make sure 00:06:47.105 this does not affect your deployments: 00:06:47.105 - DUP for metadata (-m dup) 00:06:47.105 - enabled no-holes (-O no-holes) 00:06:47.105 - enabled free-space-tree (-R free-space-tree) 00:06:47.105 00:06:47.105 Label: (null) 00:06:47.105 UUID: 3e8b87cb-ae91-49cc-952e-e7430391e500 00:06:47.105 Node size: 16384 00:06:47.105 Sector size: 4096 00:06:47.105 Filesystem size: 510.00MiB 00:06:47.105 Block group profiles: 00:06:47.105 Data: single 8.00MiB 00:06:47.105 Metadata: DUP 32.00MiB 00:06:47.105 System: DUP 8.00MiB 00:06:47.105 SSD detected: yes 00:06:47.105 Zoned device: no 00:06:47.105 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:47.105 Runtime features: free-space-tree 00:06:47.105 Checksum: crc32c 00:06:47.105 Number of devices: 1 00:06:47.105 Devices: 00:06:47.105 ID SIZE PATH 00:06:47.105 1 510.00MiB /dev/nvme0n1p1 00:06:47.105 00:06:47.105 18:01:44 -- common/autotest_common.sh@921 -- # return 0 00:06:47.105 18:01:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.105 18:01:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.105 18:01:44 -- target/filesystem.sh@25 -- # sync 00:06:47.105 18:01:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.105 18:01:44 -- target/filesystem.sh@27 -- # sync 00:06:47.105 18:01:44 -- target/filesystem.sh@29 -- # i=0 00:06:47.105 18:01:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.105 18:01:44 -- target/filesystem.sh@37 -- # kill -0 60798 00:06:47.105 18:01:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.105 18:01:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.105 18:01:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.105 18:01:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.105 00:06:47.105 real 0m0.219s 00:06:47.105 user 0m0.020s 00:06:47.105 sys 0m0.065s 00:06:47.105 18:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.105 ************************************ 00:06:47.105 END TEST filesystem_in_capsule_btrfs 00:06:47.105 ************************************ 00:06:47.105 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.105 18:01:44 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:47.105 18:01:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:47.105 18:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.105 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.105 ************************************ 00:06:47.105 START TEST filesystem_in_capsule_xfs 00:06:47.105 ************************************ 00:06:47.105 18:01:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:47.105 18:01:44 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:47.105 18:01:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:47.105 18:01:44 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:47.105 18:01:44 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:47.105 18:01:44 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:47.105 18:01:44 -- common/autotest_common.sh@904 -- # local i=0 00:06:47.105 18:01:44 -- common/autotest_common.sh@905 -- # local force 00:06:47.105 18:01:44 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:47.105 18:01:44 -- common/autotest_common.sh@910 -- # force=-f 
00:06:47.105 18:01:44 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:47.105 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:47.105 = sectsz=512 attr=2, projid32bit=1 00:06:47.105 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:47.105 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:47.105 data = bsize=4096 blocks=130560, imaxpct=25 00:06:47.105 = sunit=0 swidth=0 blks 00:06:47.105 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:47.105 log =internal log bsize=4096 blocks=16384, version=2 00:06:47.105 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:47.105 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:48.041 Discarding blocks...Done. 00:06:48.041 18:01:45 -- common/autotest_common.sh@921 -- # return 0 00:06:48.041 18:01:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:49.945 18:01:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:49.945 18:01:47 -- target/filesystem.sh@25 -- # sync 00:06:49.945 18:01:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:49.945 18:01:47 -- target/filesystem.sh@27 -- # sync 00:06:49.945 18:01:47 -- target/filesystem.sh@29 -- # i=0 00:06:49.945 18:01:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:49.945 18:01:47 -- target/filesystem.sh@37 -- # kill -0 60798 00:06:49.945 18:01:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:49.945 18:01:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:49.945 18:01:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:49.945 18:01:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:49.945 00:06:49.945 real 0m2.620s 00:06:49.945 user 0m0.017s 00:06:49.945 sys 0m0.063s 00:06:49.945 18:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.945 18:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.945 ************************************ 00:06:49.945 END TEST filesystem_in_capsule_xfs 00:06:49.945 ************************************ 00:06:49.945 18:01:47 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:49.945 18:01:47 -- target/filesystem.sh@93 -- # sync 00:06:49.945 18:01:47 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:49.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:49.945 18:01:47 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:49.945 18:01:47 -- common/autotest_common.sh@1198 -- # local i=0 00:06:49.945 18:01:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:49.945 18:01:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:49.945 18:01:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:49.945 18:01:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:49.945 18:01:47 -- common/autotest_common.sh@1210 -- # return 0 00:06:49.945 18:01:47 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:49.945 18:01:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.945 18:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.945 18:01:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.945 18:01:47 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:49.945 18:01:47 -- target/filesystem.sh@101 -- # killprocess 60798 00:06:49.945 18:01:47 -- common/autotest_common.sh@926 -- # '[' -z 60798 ']' 00:06:49.945 18:01:47 -- common/autotest_common.sh@930 -- # kill -0 60798 
00:06:49.945 18:01:47 -- common/autotest_common.sh@931 -- # uname 00:06:49.945 18:01:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.945 18:01:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60798 00:06:49.945 18:01:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:49.945 18:01:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:49.945 18:01:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60798' 00:06:49.945 killing process with pid 60798 00:06:49.945 18:01:47 -- common/autotest_common.sh@945 -- # kill 60798 00:06:49.945 18:01:47 -- common/autotest_common.sh@950 -- # wait 60798 00:06:50.514 18:01:48 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:50.514 00:06:50.514 real 0m8.584s 00:06:50.514 user 0m32.144s 00:06:50.514 sys 0m1.624s 00:06:50.514 18:01:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.514 18:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 ************************************ 00:06:50.514 END TEST nvmf_filesystem_in_capsule 00:06:50.514 ************************************ 00:06:50.514 18:01:48 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:50.514 18:01:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:50.514 18:01:48 -- nvmf/common.sh@116 -- # sync 00:06:50.514 18:01:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:50.514 18:01:48 -- nvmf/common.sh@119 -- # set +e 00:06:50.514 18:01:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:50.514 18:01:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:50.514 rmmod nvme_tcp 00:06:50.514 rmmod nvme_fabrics 00:06:50.514 rmmod nvme_keyring 00:06:50.514 18:01:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:50.514 18:01:48 -- nvmf/common.sh@123 -- # set -e 00:06:50.514 18:01:48 -- nvmf/common.sh@124 -- # return 0 00:06:50.514 18:01:48 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:50.514 18:01:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:50.514 18:01:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:50.514 18:01:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:50.514 18:01:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.514 18:01:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:50.514 18:01:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.514 18:01:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.514 18:01:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.514 18:01:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:50.514 00:06:50.514 real 0m19.112s 00:06:50.514 user 1m8.765s 00:06:50.514 sys 0m3.610s 00:06:50.514 18:01:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.514 18:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 ************************************ 00:06:50.514 END TEST nvmf_filesystem 00:06:50.514 ************************************ 00:06:50.514 18:01:48 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:50.514 18:01:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:50.514 18:01:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.514 18:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 ************************************ 00:06:50.514 START TEST nvmf_discovery 00:06:50.514 ************************************ 00:06:50.514 18:01:48 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:50.772 * Looking for test storage... 00:06:50.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.772 18:01:48 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:50.772 18:01:48 -- nvmf/common.sh@7 -- # uname -s 00:06:50.772 18:01:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.772 18:01:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.772 18:01:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.772 18:01:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.772 18:01:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.772 18:01:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.772 18:01:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.772 18:01:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.772 18:01:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.772 18:01:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.772 18:01:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:50.772 18:01:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:50.772 18:01:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.772 18:01:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.772 18:01:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:50.772 18:01:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.772 18:01:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.772 18:01:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.772 18:01:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.772 18:01:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.772 18:01:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.772 18:01:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.772 18:01:48 -- paths/export.sh@5 -- # export PATH 00:06:50.772 18:01:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.772 18:01:48 -- nvmf/common.sh@46 -- # : 0 00:06:50.772 18:01:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:50.773 18:01:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:50.773 18:01:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:50.773 18:01:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.773 18:01:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.773 18:01:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:50.773 18:01:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:50.773 18:01:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:50.773 18:01:48 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:50.773 18:01:48 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:50.773 18:01:48 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:50.773 18:01:48 -- target/discovery.sh@15 -- # hash nvme 00:06:50.773 18:01:48 -- target/discovery.sh@20 -- # nvmftestinit 00:06:50.773 18:01:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:50.773 18:01:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.773 18:01:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:50.773 18:01:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:50.773 18:01:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:50.773 18:01:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.773 18:01:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.773 18:01:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.773 18:01:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:50.773 18:01:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:50.773 18:01:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:50.773 18:01:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:50.773 18:01:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:50.773 18:01:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:50.773 18:01:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.773 18:01:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.773 18:01:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:50.773 18:01:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:50.773 18:01:48 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:50.773 18:01:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:50.773 18:01:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:50.773 18:01:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.773 18:01:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:50.773 18:01:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:50.773 18:01:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:50.773 18:01:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:50.773 18:01:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:50.773 18:01:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:50.773 Cannot find device "nvmf_tgt_br" 00:06:50.773 18:01:48 -- nvmf/common.sh@154 -- # true 00:06:50.773 18:01:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:50.773 Cannot find device "nvmf_tgt_br2" 00:06:50.773 18:01:48 -- nvmf/common.sh@155 -- # true 00:06:50.773 18:01:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:50.773 18:01:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:50.773 Cannot find device "nvmf_tgt_br" 00:06:50.773 18:01:48 -- nvmf/common.sh@157 -- # true 00:06:50.773 18:01:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:50.773 Cannot find device "nvmf_tgt_br2" 00:06:50.773 18:01:48 -- nvmf/common.sh@158 -- # true 00:06:50.773 18:01:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:50.773 18:01:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:50.773 18:01:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:50.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.773 18:01:48 -- nvmf/common.sh@161 -- # true 00:06:50.773 18:01:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:50.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.773 18:01:48 -- nvmf/common.sh@162 -- # true 00:06:50.773 18:01:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:50.773 18:01:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:50.773 18:01:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:50.773 18:01:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:50.773 18:01:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:50.773 18:01:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:50.773 18:01:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:50.773 18:01:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:51.031 18:01:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:51.031 18:01:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:51.031 18:01:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:51.031 18:01:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:51.031 18:01:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:51.031 18:01:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:51.031 18:01:48 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:51.031 18:01:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:51.031 18:01:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:51.031 18:01:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:51.031 18:01:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:51.031 18:01:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:51.031 18:01:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:51.031 18:01:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:51.031 18:01:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:51.031 18:01:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:51.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:06:51.031 00:06:51.031 --- 10.0.0.2 ping statistics --- 00:06:51.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.031 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:06:51.031 18:01:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:51.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:51.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:06:51.031 00:06:51.031 --- 10.0.0.3 ping statistics --- 00:06:51.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.031 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:06:51.031 18:01:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:51.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:51.031 00:06:51.031 --- 10.0.0.1 ping statistics --- 00:06:51.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.031 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:51.031 18:01:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.031 18:01:48 -- nvmf/common.sh@421 -- # return 0 00:06:51.031 18:01:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:51.031 18:01:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.031 18:01:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:51.031 18:01:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:51.031 18:01:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.031 18:01:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:51.031 18:01:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:51.031 18:01:48 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:51.031 18:01:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:51.031 18:01:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:51.031 18:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 18:01:48 -- nvmf/common.sh@469 -- # nvmfpid=61249 00:06:51.032 18:01:48 -- nvmf/common.sh@470 -- # waitforlisten 61249 00:06:51.032 18:01:48 -- common/autotest_common.sh@819 -- # '[' -z 61249 ']' 00:06:51.032 18:01:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.032 18:01:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.032 18:01:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.032 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.032 18:01:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.032 18:01:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.032 18:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.032 [2024-04-25 18:01:48.889156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:51.032 [2024-04-25 18:01:48.889244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.290 [2024-04-25 18:01:49.027366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.290 [2024-04-25 18:01:49.127074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.290 [2024-04-25 18:01:49.127416] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.290 [2024-04-25 18:01:49.127526] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.290 [2024-04-25 18:01:49.127578] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.290 [2024-04-25 18:01:49.127813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.290 [2024-04-25 18:01:49.128474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.290 [2024-04-25 18:01:49.128607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.290 [2024-04-25 18:01:49.128611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.227 18:01:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.227 18:01:49 -- common/autotest_common.sh@852 -- # return 0 00:06:52.227 18:01:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:52.227 18:01:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:52.227 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.227 18:01:49 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.227 18:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 [2024-04-25 18:01:49.928507] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.227 18:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:49 -- target/discovery.sh@26 -- # seq 1 4 00:06:52.227 18:01:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:52.227 18:01:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:52.227 18:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 Null1 00:06:52.227 18:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:52.227 18:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
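The sequence that begins above repeats for Null1 through Null4 (the `for i in $(seq 1 4)` loop in discovery.sh) and continues in the lines below. Condensed into plain shell from the commands this log traces (rpc_cmd wraps scripts/rpc.py; the literal loop and simplified discover call are a sketch, not the verbatim script), each iteration plus the final discovery setup is roughly:

  # Sketch: four null-bdev subsystems, one discovery listener, one referral.
  for i in 1 2 3 4; do
      scripts/rpc.py bdev_null_create Null$i 102400 512                # size/block-size values as traced above
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # the log below shows the expected 6 records: the discovery subsystem, 4 NVMe subsystems, 1 referral
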
00:06:52.227 18:01:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:52.227 18:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.227 18:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 [2024-04-25 18:01:49.988931] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.227 18:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:52.227 18:01:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:52.227 18:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 Null2 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:52.227 18:01:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 Null3 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:52.227 18:01:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 Null4 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:52.227 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.227 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.227 18:01:50 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 4420 00:06:52.486 00:06:52.486 Discovery Log Number of Records 6, Generation counter 6 00:06:52.486 =====Discovery Log Entry 0====== 00:06:52.486 trtype: tcp 00:06:52.486 adrfam: ipv4 00:06:52.486 subtype: current discovery subsystem 00:06:52.486 treq: not required 00:06:52.486 portid: 0 00:06:52.486 trsvcid: 4420 00:06:52.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:52.486 traddr: 10.0.0.2 00:06:52.486 eflags: explicit discovery connections, duplicate discovery information 00:06:52.486 sectype: none 00:06:52.486 =====Discovery Log Entry 1====== 00:06:52.486 trtype: tcp 00:06:52.486 adrfam: ipv4 00:06:52.486 subtype: nvme subsystem 00:06:52.486 treq: not required 00:06:52.486 portid: 0 00:06:52.486 trsvcid: 4420 00:06:52.486 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:52.486 traddr: 10.0.0.2 00:06:52.486 eflags: none 00:06:52.486 sectype: none 00:06:52.486 =====Discovery Log Entry 2====== 00:06:52.486 trtype: tcp 00:06:52.486 adrfam: ipv4 00:06:52.486 subtype: nvme subsystem 00:06:52.486 treq: not required 00:06:52.486 portid: 0 00:06:52.486 trsvcid: 4420 
00:06:52.486 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:52.486 traddr: 10.0.0.2 00:06:52.486 eflags: none 00:06:52.486 sectype: none 00:06:52.486 =====Discovery Log Entry 3====== 00:06:52.486 trtype: tcp 00:06:52.486 adrfam: ipv4 00:06:52.486 subtype: nvme subsystem 00:06:52.486 treq: not required 00:06:52.486 portid: 0 00:06:52.486 trsvcid: 4420 00:06:52.486 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:52.486 traddr: 10.0.0.2 00:06:52.486 eflags: none 00:06:52.486 sectype: none 00:06:52.486 =====Discovery Log Entry 4====== 00:06:52.486 trtype: tcp 00:06:52.486 adrfam: ipv4 00:06:52.486 subtype: nvme subsystem 00:06:52.486 treq: not required 00:06:52.486 portid: 0 00:06:52.486 trsvcid: 4420 00:06:52.486 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:52.486 traddr: 10.0.0.2 00:06:52.486 eflags: none 00:06:52.486 sectype: none 00:06:52.486 =====Discovery Log Entry 5====== 00:06:52.486 trtype: tcp 00:06:52.486 adrfam: ipv4 00:06:52.486 subtype: discovery subsystem referral 00:06:52.486 treq: not required 00:06:52.486 portid: 0 00:06:52.486 trsvcid: 4430 00:06:52.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:52.486 traddr: 10.0.0.2 00:06:52.486 eflags: none 00:06:52.486 sectype: none 00:06:52.486 Perform nvmf subsystem discovery via RPC 00:06:52.486 18:01:50 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:52.486 18:01:50 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:52.486 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.486 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.486 [2024-04-25 18:01:50.180913] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:52.486 [ 00:06:52.486 { 00:06:52.486 "allow_any_host": true, 00:06:52.486 "hosts": [], 00:06:52.486 "listen_addresses": [ 00:06:52.486 { 00:06:52.486 "adrfam": "IPv4", 00:06:52.486 "traddr": "10.0.0.2", 00:06:52.486 "transport": "TCP", 00:06:52.486 "trsvcid": "4420", 00:06:52.486 "trtype": "TCP" 00:06:52.486 } 00:06:52.486 ], 00:06:52.486 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:52.486 "subtype": "Discovery" 00:06:52.486 }, 00:06:52.486 { 00:06:52.486 "allow_any_host": true, 00:06:52.486 "hosts": [], 00:06:52.486 "listen_addresses": [ 00:06:52.486 { 00:06:52.486 "adrfam": "IPv4", 00:06:52.486 "traddr": "10.0.0.2", 00:06:52.486 "transport": "TCP", 00:06:52.486 "trsvcid": "4420", 00:06:52.486 "trtype": "TCP" 00:06:52.486 } 00:06:52.486 ], 00:06:52.486 "max_cntlid": 65519, 00:06:52.487 "max_namespaces": 32, 00:06:52.487 "min_cntlid": 1, 00:06:52.487 "model_number": "SPDK bdev Controller", 00:06:52.487 "namespaces": [ 00:06:52.487 { 00:06:52.487 "bdev_name": "Null1", 00:06:52.487 "name": "Null1", 00:06:52.487 "nguid": "3C49BFF6F687411090AEC78E1747C3EF", 00:06:52.487 "nsid": 1, 00:06:52.487 "uuid": "3c49bff6-f687-4110-90ae-c78e1747c3ef" 00:06:52.487 } 00:06:52.487 ], 00:06:52.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:52.487 "serial_number": "SPDK00000000000001", 00:06:52.487 "subtype": "NVMe" 00:06:52.487 }, 00:06:52.487 { 00:06:52.487 "allow_any_host": true, 00:06:52.487 "hosts": [], 00:06:52.487 "listen_addresses": [ 00:06:52.487 { 00:06:52.487 "adrfam": "IPv4", 00:06:52.487 "traddr": "10.0.0.2", 00:06:52.487 "transport": "TCP", 00:06:52.487 "trsvcid": "4420", 00:06:52.487 "trtype": "TCP" 00:06:52.487 } 00:06:52.487 ], 00:06:52.487 "max_cntlid": 65519, 00:06:52.487 "max_namespaces": 32, 00:06:52.487 "min_cntlid": 1, 
00:06:52.487 "model_number": "SPDK bdev Controller", 00:06:52.487 "namespaces": [ 00:06:52.487 { 00:06:52.487 "bdev_name": "Null2", 00:06:52.487 "name": "Null2", 00:06:52.487 "nguid": "344E5E761C7D4204A5D01550A6612C5A", 00:06:52.487 "nsid": 1, 00:06:52.487 "uuid": "344e5e76-1c7d-4204-a5d0-1550a6612c5a" 00:06:52.487 } 00:06:52.487 ], 00:06:52.487 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:52.487 "serial_number": "SPDK00000000000002", 00:06:52.487 "subtype": "NVMe" 00:06:52.487 }, 00:06:52.487 { 00:06:52.487 "allow_any_host": true, 00:06:52.487 "hosts": [], 00:06:52.487 "listen_addresses": [ 00:06:52.487 { 00:06:52.487 "adrfam": "IPv4", 00:06:52.487 "traddr": "10.0.0.2", 00:06:52.487 "transport": "TCP", 00:06:52.487 "trsvcid": "4420", 00:06:52.487 "trtype": "TCP" 00:06:52.487 } 00:06:52.487 ], 00:06:52.487 "max_cntlid": 65519, 00:06:52.487 "max_namespaces": 32, 00:06:52.487 "min_cntlid": 1, 00:06:52.487 "model_number": "SPDK bdev Controller", 00:06:52.487 "namespaces": [ 00:06:52.487 { 00:06:52.487 "bdev_name": "Null3", 00:06:52.487 "name": "Null3", 00:06:52.487 "nguid": "00CCBD9B5D044D549E0936803E8D51E6", 00:06:52.487 "nsid": 1, 00:06:52.487 "uuid": "00ccbd9b-5d04-4d54-9e09-36803e8d51e6" 00:06:52.487 } 00:06:52.487 ], 00:06:52.487 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:52.487 "serial_number": "SPDK00000000000003", 00:06:52.487 "subtype": "NVMe" 00:06:52.487 }, 00:06:52.487 { 00:06:52.487 "allow_any_host": true, 00:06:52.487 "hosts": [], 00:06:52.487 "listen_addresses": [ 00:06:52.487 { 00:06:52.487 "adrfam": "IPv4", 00:06:52.487 "traddr": "10.0.0.2", 00:06:52.487 "transport": "TCP", 00:06:52.487 "trsvcid": "4420", 00:06:52.487 "trtype": "TCP" 00:06:52.487 } 00:06:52.487 ], 00:06:52.487 "max_cntlid": 65519, 00:06:52.487 "max_namespaces": 32, 00:06:52.487 "min_cntlid": 1, 00:06:52.487 "model_number": "SPDK bdev Controller", 00:06:52.487 "namespaces": [ 00:06:52.487 { 00:06:52.487 "bdev_name": "Null4", 00:06:52.487 "name": "Null4", 00:06:52.487 "nguid": "CB4288D701CA4883B9A26FB5B6ED7158", 00:06:52.487 "nsid": 1, 00:06:52.487 "uuid": "cb4288d7-01ca-4883-b9a2-6fb5b6ed7158" 00:06:52.487 } 00:06:52.487 ], 00:06:52.487 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:52.487 "serial_number": "SPDK00000000000004", 00:06:52.487 "subtype": "NVMe" 00:06:52.487 } 00:06:52.487 ] 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@42 -- # seq 1 4 00:06:52.487 18:01:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.487 18:01:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.487 18:01:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.487 18:01:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.487 18:01:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:52.487 18:01:50 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:52.487 18:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.487 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.487 18:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.487 18:01:50 -- target/discovery.sh@49 -- # check_bdevs= 00:06:52.487 18:01:50 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:52.487 18:01:50 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:52.487 18:01:50 -- target/discovery.sh@57 -- # nvmftestfini 00:06:52.487 18:01:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:52.487 18:01:50 -- nvmf/common.sh@116 -- # sync 00:06:52.487 18:01:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:52.487 18:01:50 -- nvmf/common.sh@119 -- # set +e 00:06:52.487 18:01:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:52.487 18:01:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:52.487 rmmod nvme_tcp 00:06:52.487 rmmod nvme_fabrics 00:06:52.487 rmmod nvme_keyring 00:06:52.487 18:01:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:52.746 18:01:50 -- nvmf/common.sh@123 -- # set -e 00:06:52.746 18:01:50 -- nvmf/common.sh@124 -- # return 0 00:06:52.746 18:01:50 -- nvmf/common.sh@477 -- # '[' -n 61249 ']' 00:06:52.746 18:01:50 -- nvmf/common.sh@478 -- # killprocess 61249 00:06:52.746 18:01:50 -- common/autotest_common.sh@926 -- # '[' -z 61249 ']' 00:06:52.746 18:01:50 -- 
common/autotest_common.sh@930 -- # kill -0 61249 00:06:52.746 18:01:50 -- common/autotest_common.sh@931 -- # uname 00:06:52.746 18:01:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.746 18:01:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61249 00:06:52.746 18:01:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.746 18:01:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.746 killing process with pid 61249 00:06:52.746 18:01:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61249' 00:06:52.746 18:01:50 -- common/autotest_common.sh@945 -- # kill 61249 00:06:52.746 [2024-04-25 18:01:50.448922] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:52.746 18:01:50 -- common/autotest_common.sh@950 -- # wait 61249 00:06:53.005 18:01:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:53.005 18:01:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:53.005 18:01:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:53.005 18:01:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:53.005 18:01:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:53.005 18:01:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.005 18:01:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.005 18:01:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.005 18:01:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:53.005 00:06:53.005 real 0m2.356s 00:06:53.005 user 0m6.445s 00:06:53.005 sys 0m0.623s 00:06:53.005 18:01:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.005 ************************************ 00:06:53.005 END TEST nvmf_discovery 00:06:53.005 ************************************ 00:06:53.005 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:53.005 18:01:50 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:53.005 18:01:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:53.005 18:01:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.005 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:53.005 ************************************ 00:06:53.005 START TEST nvmf_referrals 00:06:53.005 ************************************ 00:06:53.005 18:01:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:53.005 * Looking for test storage... 
00:06:53.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:53.005 18:01:50 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.005 18:01:50 -- nvmf/common.sh@7 -- # uname -s 00:06:53.005 18:01:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.005 18:01:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.005 18:01:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.005 18:01:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.005 18:01:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.005 18:01:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.005 18:01:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.005 18:01:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.005 18:01:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.005 18:01:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.005 18:01:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:53.005 18:01:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:53.005 18:01:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.005 18:01:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.005 18:01:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:53.005 18:01:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.005 18:01:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.005 18:01:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.005 18:01:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.005 18:01:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.005 18:01:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.005 18:01:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.005 18:01:50 -- 
paths/export.sh@5 -- # export PATH 00:06:53.005 18:01:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.005 18:01:50 -- nvmf/common.sh@46 -- # : 0 00:06:53.005 18:01:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:53.005 18:01:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:53.005 18:01:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:53.005 18:01:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.005 18:01:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.005 18:01:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:53.005 18:01:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:53.005 18:01:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:53.005 18:01:50 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:53.005 18:01:50 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:53.005 18:01:50 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:53.005 18:01:50 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:53.005 18:01:50 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:53.005 18:01:50 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:53.005 18:01:50 -- target/referrals.sh@37 -- # nvmftestinit 00:06:53.005 18:01:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:53.005 18:01:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.005 18:01:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:53.005 18:01:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:53.005 18:01:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:53.006 18:01:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.006 18:01:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.006 18:01:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.006 18:01:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:53.006 18:01:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:53.006 18:01:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:53.006 18:01:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:53.006 18:01:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:53.006 18:01:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:53.006 18:01:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.006 18:01:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.006 18:01:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:53.006 18:01:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:53.006 18:01:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:53.006 18:01:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:53.006 18:01:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:53.006 18:01:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.006 18:01:50 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:53.006 18:01:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:53.006 18:01:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:53.006 18:01:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:53.006 18:01:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:53.006 18:01:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:53.264 Cannot find device "nvmf_tgt_br" 00:06:53.264 18:01:50 -- nvmf/common.sh@154 -- # true 00:06:53.264 18:01:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:53.264 Cannot find device "nvmf_tgt_br2" 00:06:53.264 18:01:50 -- nvmf/common.sh@155 -- # true 00:06:53.264 18:01:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:53.264 18:01:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:53.264 Cannot find device "nvmf_tgt_br" 00:06:53.264 18:01:50 -- nvmf/common.sh@157 -- # true 00:06:53.264 18:01:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:53.264 Cannot find device "nvmf_tgt_br2" 00:06:53.264 18:01:50 -- nvmf/common.sh@158 -- # true 00:06:53.264 18:01:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:53.264 18:01:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:53.264 18:01:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:53.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.264 18:01:51 -- nvmf/common.sh@161 -- # true 00:06:53.264 18:01:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:53.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.264 18:01:51 -- nvmf/common.sh@162 -- # true 00:06:53.264 18:01:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:53.264 18:01:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:53.264 18:01:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:53.264 18:01:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:53.264 18:01:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:53.264 18:01:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:53.264 18:01:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:53.264 18:01:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:53.264 18:01:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:53.264 18:01:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:53.264 18:01:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:53.264 18:01:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:53.264 18:01:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:53.264 18:01:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:53.264 18:01:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:53.264 18:01:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:53.264 18:01:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:53.264 18:01:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:53.264 18:01:51 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:53.264 18:01:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:53.530 18:01:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:53.530 18:01:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:53.530 18:01:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:53.530 18:01:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:53.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:06:53.530 00:06:53.530 --- 10.0.0.2 ping statistics --- 00:06:53.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.530 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:53.530 18:01:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:53.530 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:53.530 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:06:53.530 00:06:53.530 --- 10.0.0.3 ping statistics --- 00:06:53.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.530 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:06:53.530 18:01:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:53.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:06:53.530 00:06:53.530 --- 10.0.0.1 ping statistics --- 00:06:53.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.530 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:06:53.530 18:01:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.530 18:01:51 -- nvmf/common.sh@421 -- # return 0 00:06:53.530 18:01:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:53.530 18:01:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.530 18:01:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:53.530 18:01:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:53.530 18:01:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.530 18:01:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:53.530 18:01:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:53.530 18:01:51 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:53.530 18:01:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:53.530 18:01:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.530 18:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.530 18:01:51 -- nvmf/common.sh@469 -- # nvmfpid=61476 00:06:53.530 18:01:51 -- nvmf/common.sh@470 -- # waitforlisten 61476 00:06:53.530 18:01:51 -- common/autotest_common.sh@819 -- # '[' -z 61476 ']' 00:06:53.530 18:01:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.530 18:01:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.530 18:01:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.530 18:01:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
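The nvmf_veth_init sequence traced above is how test/nvmf/common.sh builds the virtual topology for these runs: a network namespace for the target, two veth pairs, and a bridge joining the host-side peers. A minimal manual sketch of the same layout, using only the interface names and addresses that appear in the log (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up      # bridge the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # reachability check, as in the log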
00:06:53.530 18:01:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.530 18:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.530 [2024-04-25 18:01:51.317749] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:53.530 [2024-04-25 18:01:51.317851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.805 [2024-04-25 18:01:51.461424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.805 [2024-04-25 18:01:51.575015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.805 [2024-04-25 18:01:51.575174] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.805 [2024-04-25 18:01:51.575190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.805 [2024-04-25 18:01:51.575201] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.805 [2024-04-25 18:01:51.575348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.805 [2024-04-25 18:01:51.575994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.805 [2024-04-25 18:01:51.576194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.805 [2024-04-25 18:01:51.576293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.372 18:01:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.373 18:01:52 -- common/autotest_common.sh@852 -- # return 0 00:06:54.373 18:01:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:54.373 18:01:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:54.373 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.373 18:01:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.373 18:01:52 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.373 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.373 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.373 [2024-04-25 18:01:52.297520] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.632 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:54.632 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.632 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.632 [2024-04-25 18:01:52.323528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:54.632 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:54.632 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.632 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.632 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:54.632 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.632 18:01:52 -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.632 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:54.632 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.632 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.632 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.632 18:01:52 -- target/referrals.sh@48 -- # jq length 00:06:54.632 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.632 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.632 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:54.632 18:01:52 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:54.632 18:01:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:54.632 18:01:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.632 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.632 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.632 18:01:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:54.632 18:01:52 -- target/referrals.sh@21 -- # sort 00:06:54.632 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:54.632 18:01:52 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:54.632 18:01:52 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:54.632 18:01:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:54.632 18:01:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:54.632 18:01:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.632 18:01:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:54.632 18:01:52 -- target/referrals.sh@26 -- # sort 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:54.891 18:01:52 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:54.891 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.891 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.891 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:54.891 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.891 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.891 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:54.891 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.891 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.891 18:01:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.891 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.891 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.891 18:01:52 -- target/referrals.sh@56 -- # jq length 00:06:54.891 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:54.891 18:01:52 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:54.891 18:01:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:54.891 18:01:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # sort 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # echo 00:06:54.891 18:01:52 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:54.891 18:01:52 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:54.891 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.891 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.891 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:54.891 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.891 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.891 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:54.891 18:01:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:54.891 18:01:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:54.891 18:01:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:54.891 18:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.891 18:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.891 18:01:52 -- target/referrals.sh@21 -- # sort 00:06:54.891 18:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:54.891 18:01:52 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:54.891 18:01:52 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:54.891 18:01:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:54.891 18:01:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # sort 00:06:54.891 18:01:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:55.150 18:01:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:55.150 18:01:52 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:55.150 18:01:52 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:55.150 18:01:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:55.150 18:01:52 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:55.150 18:01:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.150 18:01:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:55.150 18:01:52 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:55.150 18:01:52 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:55.150 18:01:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:55.150 18:01:52 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:55.150 18:01:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.150 18:01:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:55.150 18:01:53 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:55.150 18:01:53 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:55.150 18:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.150 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.150 18:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.151 18:01:53 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:55.151 18:01:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:55.151 18:01:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:55.151 18:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.151 18:01:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:55.151 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.151 18:01:53 -- target/referrals.sh@21 -- # sort 00:06:55.151 18:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.151 18:01:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:55.151 18:01:53 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:55.151 18:01:53 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:55.151 18:01:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:55.151 18:01:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:55.151 18:01:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.151 18:01:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:55.151 18:01:53 -- target/referrals.sh@26 -- # sort 00:06:55.408 18:01:53 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:55.408 18:01:53 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:55.408 18:01:53 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme 
subsystem' 00:06:55.408 18:01:53 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:55.408 18:01:53 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:55.408 18:01:53 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.408 18:01:53 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:55.408 18:01:53 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:55.408 18:01:53 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:55.408 18:01:53 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:55.408 18:01:53 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:55.408 18:01:53 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:55.408 18:01:53 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.408 18:01:53 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:55.409 18:01:53 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:55.409 18:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.409 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.409 18:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.409 18:01:53 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:55.409 18:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.409 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.409 18:01:53 -- target/referrals.sh@82 -- # jq length 00:06:55.409 18:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.409 18:01:53 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:55.409 18:01:53 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:55.409 18:01:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:55.409 18:01:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:55.667 18:01:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:55.667 18:01:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:55.667 18:01:53 -- target/referrals.sh@26 -- # sort 00:06:55.667 18:01:53 -- target/referrals.sh@26 -- # echo 00:06:55.667 18:01:53 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:55.667 18:01:53 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:55.667 18:01:53 -- target/referrals.sh@86 -- # nvmftestfini 00:06:55.667 18:01:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:55.667 18:01:53 -- nvmf/common.sh@116 -- # sync 00:06:55.667 18:01:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:55.667 18:01:53 -- nvmf/common.sh@119 -- # set +e 00:06:55.667 18:01:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:55.667 18:01:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:55.667 rmmod nvme_tcp 00:06:55.667 rmmod nvme_fabrics 00:06:55.667 rmmod nvme_keyring 
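The referral checks above exercise the discovery service from both sides: referrals are registered and removed over RPC, and the initiator reads them back as extra discovery log entries. A condensed sketch of that round trip, assuming the target is already listening for discovery on 10.0.0.2:8009 and that rpc_cmd resolves to scripts/rpc.py as in the autotest harness:

    # register referrals on the discovery subsystem
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length      # expect 3
    # the initiator should see the same addresses in the discovery log
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # remove an entry again
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The test above also passes --hostnqn/--hostid to nvme discover; they are omitted here for brevity.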
00:06:55.667 18:01:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:55.667 18:01:53 -- nvmf/common.sh@123 -- # set -e 00:06:55.667 18:01:53 -- nvmf/common.sh@124 -- # return 0 00:06:55.667 18:01:53 -- nvmf/common.sh@477 -- # '[' -n 61476 ']' 00:06:55.667 18:01:53 -- nvmf/common.sh@478 -- # killprocess 61476 00:06:55.667 18:01:53 -- common/autotest_common.sh@926 -- # '[' -z 61476 ']' 00:06:55.667 18:01:53 -- common/autotest_common.sh@930 -- # kill -0 61476 00:06:55.667 18:01:53 -- common/autotest_common.sh@931 -- # uname 00:06:55.667 18:01:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:55.667 18:01:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61476 00:06:55.667 18:01:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:55.667 18:01:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:55.667 killing process with pid 61476 00:06:55.667 18:01:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61476' 00:06:55.667 18:01:53 -- common/autotest_common.sh@945 -- # kill 61476 00:06:55.667 18:01:53 -- common/autotest_common.sh@950 -- # wait 61476 00:06:55.925 18:01:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:55.925 18:01:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:55.925 18:01:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:55.925 18:01:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.925 18:01:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:55.925 18:01:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.925 18:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.925 18:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.925 18:01:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:55.925 ************************************ 00:06:55.925 END TEST nvmf_referrals 00:06:55.925 ************************************ 00:06:55.925 00:06:55.925 real 0m3.034s 00:06:55.925 user 0m9.689s 00:06:55.925 sys 0m0.848s 00:06:55.925 18:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.925 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 18:01:53 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:56.184 18:01:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:56.184 18:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.184 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:56.184 ************************************ 00:06:56.184 START TEST nvmf_connect_disconnect 00:06:56.184 ************************************ 00:06:56.184 18:01:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:56.184 * Looking for test storage... 
00:06:56.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:56.184 18:01:53 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:56.184 18:01:53 -- nvmf/common.sh@7 -- # uname -s 00:06:56.184 18:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.184 18:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.184 18:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.184 18:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.184 18:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.184 18:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.184 18:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.184 18:01:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.184 18:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.184 18:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.184 18:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:56.184 18:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:06:56.184 18:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.184 18:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.184 18:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:56.184 18:01:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.184 18:01:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.184 18:01:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.184 18:01:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.184 18:01:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.184 18:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.185 18:01:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.185 18:01:53 -- 
paths/export.sh@5 -- # export PATH 00:06:56.185 18:01:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.185 18:01:53 -- nvmf/common.sh@46 -- # : 0 00:06:56.185 18:01:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:56.185 18:01:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:56.185 18:01:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:56.185 18:01:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.185 18:01:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.185 18:01:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:56.185 18:01:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:56.185 18:01:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:56.185 18:01:53 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:56.185 18:01:53 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:56.185 18:01:53 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:56.185 18:01:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:56.185 18:01:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.185 18:01:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:56.185 18:01:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:56.185 18:01:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:56.185 18:01:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.185 18:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.185 18:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.185 18:01:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:56.185 18:01:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:56.185 18:01:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:56.185 18:01:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:56.185 18:01:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:56.185 18:01:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:56.185 18:01:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.185 18:01:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.185 18:01:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:56.185 18:01:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:56.185 18:01:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:56.185 18:01:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:56.185 18:01:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:56.185 18:01:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.185 18:01:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:56.185 18:01:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:56.185 18:01:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:56.185 18:01:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:56.185 18:01:53 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:06:56.185 18:01:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:56.185 Cannot find device "nvmf_tgt_br" 00:06:56.185 18:01:54 -- nvmf/common.sh@154 -- # true 00:06:56.185 18:01:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:56.185 Cannot find device "nvmf_tgt_br2" 00:06:56.185 18:01:54 -- nvmf/common.sh@155 -- # true 00:06:56.185 18:01:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:56.185 18:01:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:56.185 Cannot find device "nvmf_tgt_br" 00:06:56.185 18:01:54 -- nvmf/common.sh@157 -- # true 00:06:56.185 18:01:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:56.185 Cannot find device "nvmf_tgt_br2" 00:06:56.185 18:01:54 -- nvmf/common.sh@158 -- # true 00:06:56.185 18:01:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:56.185 18:01:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:56.185 18:01:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:56.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:56.443 18:01:54 -- nvmf/common.sh@161 -- # true 00:06:56.443 18:01:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:56.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:56.443 18:01:54 -- nvmf/common.sh@162 -- # true 00:06:56.443 18:01:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:56.443 18:01:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:56.443 18:01:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:56.443 18:01:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:56.443 18:01:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:56.443 18:01:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:56.443 18:01:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:56.443 18:01:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:56.443 18:01:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:56.443 18:01:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:56.443 18:01:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:56.443 18:01:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:56.443 18:01:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:56.443 18:01:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:56.443 18:01:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:56.443 18:01:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:56.443 18:01:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:56.443 18:01:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:56.443 18:01:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:56.443 18:01:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:56.443 18:01:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:56.444 18:01:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:06:56.444 18:01:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:56.444 18:01:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:56.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:06:56.444 00:06:56.444 --- 10.0.0.2 ping statistics --- 00:06:56.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.444 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:06:56.444 18:01:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:56.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:56.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:06:56.444 00:06:56.444 --- 10.0.0.3 ping statistics --- 00:06:56.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.444 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:06:56.444 18:01:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:56.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:06:56.444 00:06:56.444 --- 10.0.0.1 ping statistics --- 00:06:56.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.444 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:56.444 18:01:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.444 18:01:54 -- nvmf/common.sh@421 -- # return 0 00:06:56.444 18:01:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:56.444 18:01:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.444 18:01:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:56.444 18:01:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:56.444 18:01:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.444 18:01:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:56.444 18:01:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:56.444 18:01:54 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:56.444 18:01:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:56.444 18:01:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:56.444 18:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.444 18:01:54 -- nvmf/common.sh@469 -- # nvmfpid=61777 00:06:56.444 18:01:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:56.444 18:01:54 -- nvmf/common.sh@470 -- # waitforlisten 61777 00:06:56.444 18:01:54 -- common/autotest_common.sh@819 -- # '[' -z 61777 ']' 00:06:56.444 18:01:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.444 18:01:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:56.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.444 18:01:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.444 18:01:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:56.444 18:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.703 [2024-04-25 18:01:54.433615] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
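As in the earlier tests, nvmfappstart launches the target inside the namespace (pid 61777 here) and waitforlisten blocks until its RPC socket answers before any further rpc_cmd calls are issued. A rough sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and a simple polling loop rather than the harness's own helper:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the application accepts RPCs on the default socket
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    # from this point the test issues nvmf_create_transport, bdev_malloc_create, etc.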
00:06:56.703 [2024-04-25 18:01:54.433740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.703 [2024-04-25 18:01:54.569915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.962 [2024-04-25 18:01:54.666744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.962 [2024-04-25 18:01:54.666921] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.962 [2024-04-25 18:01:54.666935] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.962 [2024-04-25 18:01:54.666944] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.962 [2024-04-25 18:01:54.667127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.962 [2024-04-25 18:01:54.667852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.962 [2024-04-25 18:01:54.668002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.962 [2024-04-25 18:01:54.668009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.549 18:01:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:57.549 18:01:55 -- common/autotest_common.sh@852 -- # return 0 00:06:57.549 18:01:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:57.549 18:01:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:57.549 18:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.549 18:01:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.549 18:01:55 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:57.549 18:01:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.549 18:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.549 [2024-04-25 18:01:55.479643] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.808 18:01:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:57.808 18:01:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.808 18:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.808 18:01:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:57.808 18:01:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.808 18:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.808 18:01:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:57.808 18:01:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.808 18:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.808 18:01:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:57.808 18:01:55 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.808 18:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.808 [2024-04-25 18:01:55.550142] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.808 18:01:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:06:57.808 18:01:55 -- target/connect_disconnect.sh@34 -- # set +x 00:07:00.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:06.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:29.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.716 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.437 18:05:39 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:42.437 18:05:39 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:42.437 18:05:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:42.437 18:05:39 -- nvmf/common.sh@116 -- # sync 00:10:42.437 18:05:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:42.437 18:05:39 -- nvmf/common.sh@119 -- # set +e 00:10:42.437 18:05:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:42.437 18:05:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:42.437 rmmod nvme_tcp 00:10:42.437 rmmod nvme_fabrics 00:10:42.437 rmmod nvme_keyring 00:10:42.437 18:05:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:42.437 18:05:39 -- nvmf/common.sh@123 -- # set -e 00:10:42.437 18:05:39 -- nvmf/common.sh@124 -- # return 0 00:10:42.437 18:05:39 -- nvmf/common.sh@477 -- # '[' -n 61777 ']' 00:10:42.437 18:05:39 -- nvmf/common.sh@478 -- # killprocess 61777 00:10:42.437 18:05:39 -- common/autotest_common.sh@926 -- # '[' -z 61777 ']' 00:10:42.437 18:05:39 -- common/autotest_common.sh@930 -- # kill -0 61777 00:10:42.437 18:05:39 -- common/autotest_common.sh@931 -- # uname 00:10:42.437 18:05:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:42.437 18:05:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61777 00:10:42.437 18:05:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:42.437 18:05:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:42.437 killing process with pid 61777 00:10:42.437 18:05:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61777' 00:10:42.437 18:05:39 -- common/autotest_common.sh@945 -- # kill 61777 00:10:42.437 18:05:39 -- common/autotest_common.sh@950 -- # wait 61777 00:10:42.437 18:05:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:42.437 18:05:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:42.437 18:05:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:42.437 18:05:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.437 18:05:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:42.437 18:05:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.437 18:05:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.437 18:05:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.437 18:05:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:42.696 00:10:42.696 real 3m46.487s 00:10:42.696 user 14m42.065s 00:10:42.696 sys 0m21.991s 00:10:42.696 18:05:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.696 
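[Editor's note] The long run of "disconnected 1 controller(s)" lines above is the nightly connect_disconnect loop: with NVME_CONNECT='nvme connect -i 8' and num_iterations=100 (both set earlier in the trace), the test repeatedly attaches and detaches the fabric controller. A condensed sketch of that loop, assuming the subsystem, address, and serial shown in this log; the real script also covers the 5-iteration non-nightly branch and per-step error checks:

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # -i 8 requests 8 I/O queues
        waitforserial SPDKISFASTANDAWESOME               # block until the namespace shows up in lsblk
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected 1 controller(s)" line
        waitforserial_disconnect SPDKISFASTANDAWESOME    # block until the device is gone again
    done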
************************************ 00:10:42.696 END TEST nvmf_connect_disconnect 00:10:42.696 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:10:42.696 ************************************ 00:10:42.696 18:05:40 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:42.696 18:05:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:42.696 18:05:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.696 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:10:42.696 ************************************ 00:10:42.696 START TEST nvmf_multitarget 00:10:42.696 ************************************ 00:10:42.696 18:05:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:42.696 * Looking for test storage... 00:10:42.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:42.696 18:05:40 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.696 18:05:40 -- nvmf/common.sh@7 -- # uname -s 00:10:42.696 18:05:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.696 18:05:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.696 18:05:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.696 18:05:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.696 18:05:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.696 18:05:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.696 18:05:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.696 18:05:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.696 18:05:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.696 18:05:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.696 18:05:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:10:42.696 18:05:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:10:42.696 18:05:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.697 18:05:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.697 18:05:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:42.697 18:05:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.697 18:05:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.697 18:05:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.697 18:05:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.697 18:05:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.697 18:05:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.697 18:05:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.697 18:05:40 -- paths/export.sh@5 -- # export PATH 00:10:42.697 18:05:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.697 18:05:40 -- nvmf/common.sh@46 -- # : 0 00:10:42.697 18:05:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:42.697 18:05:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:42.697 18:05:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:42.697 18:05:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.697 18:05:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.697 18:05:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:42.697 18:05:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:42.697 18:05:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:42.697 18:05:40 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:42.697 18:05:40 -- target/multitarget.sh@15 -- # nvmftestinit 00:10:42.697 18:05:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:42.697 18:05:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.697 18:05:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:42.697 18:05:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:42.697 18:05:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:42.697 18:05:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.697 18:05:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.697 18:05:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.697 18:05:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:42.697 18:05:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:42.697 18:05:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:42.697 18:05:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:42.697 18:05:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:42.697 18:05:40 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:10:42.697 18:05:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.697 18:05:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.697 18:05:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:42.697 18:05:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:42.697 18:05:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:42.697 18:05:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:42.697 18:05:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:42.697 18:05:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.697 18:05:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:42.697 18:05:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:42.697 18:05:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:42.697 18:05:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:42.697 18:05:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:42.697 18:05:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:42.697 Cannot find device "nvmf_tgt_br" 00:10:42.697 18:05:40 -- nvmf/common.sh@154 -- # true 00:10:42.697 18:05:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.697 Cannot find device "nvmf_tgt_br2" 00:10:42.697 18:05:40 -- nvmf/common.sh@155 -- # true 00:10:42.697 18:05:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:42.697 18:05:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:42.697 Cannot find device "nvmf_tgt_br" 00:10:42.697 18:05:40 -- nvmf/common.sh@157 -- # true 00:10:42.697 18:05:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:42.697 Cannot find device "nvmf_tgt_br2" 00:10:42.697 18:05:40 -- nvmf/common.sh@158 -- # true 00:10:42.697 18:05:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:42.956 18:05:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:42.956 18:05:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.956 18:05:40 -- nvmf/common.sh@161 -- # true 00:10:42.956 18:05:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.956 18:05:40 -- nvmf/common.sh@162 -- # true 00:10:42.956 18:05:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:42.956 18:05:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:42.956 18:05:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.956 18:05:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.956 18:05:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.956 18:05:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.956 18:05:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.956 18:05:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:42.956 18:05:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:42.956 18:05:40 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:10:42.956 18:05:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:42.956 18:05:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:42.956 18:05:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:42.956 18:05:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:42.956 18:05:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:42.956 18:05:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:42.956 18:05:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:42.956 18:05:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:42.956 18:05:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:42.956 18:05:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:42.956 18:05:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:42.956 18:05:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:42.956 18:05:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:42.956 18:05:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:42.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:10:42.956 00:10:42.956 --- 10.0.0.2 ping statistics --- 00:10:42.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.956 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:42.956 18:05:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:42.956 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:42.956 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:10:42.956 00:10:42.956 --- 10.0.0.3 ping statistics --- 00:10:42.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.956 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:42.956 18:05:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:42.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:42.957 00:10:42.957 --- 10.0.0.1 ping statistics --- 00:10:42.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.957 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:42.957 18:05:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.957 18:05:40 -- nvmf/common.sh@421 -- # return 0 00:10:42.957 18:05:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:42.957 18:05:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.957 18:05:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:42.957 18:05:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:42.957 18:05:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.957 18:05:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:42.957 18:05:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:42.957 18:05:40 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:42.957 18:05:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:42.957 18:05:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:42.957 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:10:43.215 18:05:40 -- nvmf/common.sh@469 -- # nvmfpid=65560 00:10:43.215 18:05:40 -- nvmf/common.sh@470 -- # waitforlisten 65560 00:10:43.215 18:05:40 -- common/autotest_common.sh@819 -- # '[' -z 65560 ']' 00:10:43.215 18:05:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.215 18:05:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:43.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.215 18:05:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.215 18:05:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.215 18:05:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:43.215 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:10:43.215 [2024-04-25 18:05:40.950666] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:43.215 [2024-04-25 18:05:40.950756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.215 [2024-04-25 18:05:41.094334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.473 [2024-04-25 18:05:41.191253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:43.473 [2024-04-25 18:05:41.191423] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.473 [2024-04-25 18:05:41.191436] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.473 [2024-04-25 18:05:41.191446] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:43.473 [2024-04-25 18:05:41.191583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.473 [2024-04-25 18:05:41.192370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.473 [2024-04-25 18:05:41.192461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.473 [2024-04-25 18:05:41.192464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.406 18:05:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:44.406 18:05:42 -- common/autotest_common.sh@852 -- # return 0 00:10:44.406 18:05:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:44.406 18:05:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:44.406 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 18:05:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.406 18:05:42 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:44.406 18:05:42 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:44.406 18:05:42 -- target/multitarget.sh@21 -- # jq length 00:10:44.406 18:05:42 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:44.406 18:05:42 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:44.406 "nvmf_tgt_1" 00:10:44.406 18:05:42 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:44.665 "nvmf_tgt_2" 00:10:44.665 18:05:42 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:44.665 18:05:42 -- target/multitarget.sh@28 -- # jq length 00:10:44.665 18:05:42 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:44.665 18:05:42 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:44.924 true 00:10:44.924 18:05:42 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:44.924 true 00:10:45.182 18:05:42 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:45.182 18:05:42 -- target/multitarget.sh@35 -- # jq length 00:10:45.182 18:05:42 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:45.182 18:05:42 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:45.182 18:05:42 -- target/multitarget.sh@41 -- # nvmftestfini 00:10:45.182 18:05:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:45.182 18:05:42 -- nvmf/common.sh@116 -- # sync 00:10:45.182 18:05:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:45.182 18:05:43 -- nvmf/common.sh@119 -- # set +e 00:10:45.182 18:05:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:45.182 18:05:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:45.182 rmmod nvme_tcp 00:10:45.182 rmmod nvme_fabrics 00:10:45.182 rmmod nvme_keyring 00:10:45.182 18:05:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:45.182 18:05:43 -- nvmf/common.sh@123 -- # set -e 00:10:45.182 18:05:43 -- nvmf/common.sh@124 -- # return 0 00:10:45.182 18:05:43 -- nvmf/common.sh@477 -- # '[' -n 65560 ']' 00:10:45.182 18:05:43 -- nvmf/common.sh@478 -- # killprocess 65560 00:10:45.182 18:05:43 
-- common/autotest_common.sh@926 -- # '[' -z 65560 ']' 00:10:45.182 18:05:43 -- common/autotest_common.sh@930 -- # kill -0 65560 00:10:45.182 18:05:43 -- common/autotest_common.sh@931 -- # uname 00:10:45.182 18:05:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:45.182 18:05:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65560 00:10:45.440 18:05:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:45.440 18:05:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:45.440 killing process with pid 65560 00:10:45.440 18:05:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65560' 00:10:45.440 18:05:43 -- common/autotest_common.sh@945 -- # kill 65560 00:10:45.440 18:05:43 -- common/autotest_common.sh@950 -- # wait 65560 00:10:45.699 18:05:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:45.699 18:05:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:45.699 18:05:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:45.699 18:05:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.699 18:05:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.699 18:05:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.699 18:05:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:45.699 00:10:45.699 real 0m2.989s 00:10:45.699 user 0m9.895s 00:10:45.699 sys 0m0.707s 00:10:45.699 18:05:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.699 18:05:43 -- common/autotest_common.sh@10 -- # set +x 00:10:45.699 ************************************ 00:10:45.699 END TEST nvmf_multitarget 00:10:45.699 ************************************ 00:10:45.699 18:05:43 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:45.699 18:05:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:45.699 18:05:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.699 18:05:43 -- common/autotest_common.sh@10 -- # set +x 00:10:45.699 ************************************ 00:10:45.699 START TEST nvmf_rpc 00:10:45.699 ************************************ 00:10:45.699 18:05:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:45.699 * Looking for test storage... 
00:10:45.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:45.699 18:05:43 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.699 18:05:43 -- nvmf/common.sh@7 -- # uname -s 00:10:45.699 18:05:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.699 18:05:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.699 18:05:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.699 18:05:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.699 18:05:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.699 18:05:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.699 18:05:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.699 18:05:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.699 18:05:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.699 18:05:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:10:45.699 18:05:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:10:45.699 18:05:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.699 18:05:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.699 18:05:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.699 18:05:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.699 18:05:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.699 18:05:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.699 18:05:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.699 18:05:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.699 18:05:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.699 18:05:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.699 18:05:43 -- paths/export.sh@5 
-- # export PATH 00:10:45.699 18:05:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.699 18:05:43 -- nvmf/common.sh@46 -- # : 0 00:10:45.699 18:05:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:45.699 18:05:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:45.699 18:05:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:45.699 18:05:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.699 18:05:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.699 18:05:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:45.699 18:05:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:45.699 18:05:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:45.699 18:05:43 -- target/rpc.sh@11 -- # loops=5 00:10:45.699 18:05:43 -- target/rpc.sh@23 -- # nvmftestinit 00:10:45.699 18:05:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:45.699 18:05:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.699 18:05:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:45.699 18:05:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:45.699 18:05:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:45.699 18:05:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.699 18:05:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.699 18:05:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.699 18:05:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:45.699 18:05:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:45.699 18:05:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.699 18:05:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.699 18:05:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:45.699 18:05:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:45.699 18:05:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:45.699 18:05:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:45.699 18:05:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:45.699 18:05:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.699 18:05:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:45.699 18:05:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:45.699 18:05:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:45.699 18:05:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:45.699 18:05:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:45.699 18:05:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:45.699 Cannot find device 
"nvmf_tgt_br" 00:10:45.699 18:05:43 -- nvmf/common.sh@154 -- # true 00:10:45.699 18:05:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.699 Cannot find device "nvmf_tgt_br2" 00:10:45.699 18:05:43 -- nvmf/common.sh@155 -- # true 00:10:45.699 18:05:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:45.699 18:05:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:45.699 Cannot find device "nvmf_tgt_br" 00:10:45.699 18:05:43 -- nvmf/common.sh@157 -- # true 00:10:45.699 18:05:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:45.958 Cannot find device "nvmf_tgt_br2" 00:10:45.958 18:05:43 -- nvmf/common.sh@158 -- # true 00:10:45.958 18:05:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:45.958 18:05:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:45.958 18:05:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.958 18:05:43 -- nvmf/common.sh@161 -- # true 00:10:45.958 18:05:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.958 18:05:43 -- nvmf/common.sh@162 -- # true 00:10:45.958 18:05:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:45.958 18:05:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:45.958 18:05:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:45.958 18:05:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:45.958 18:05:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:45.958 18:05:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:45.958 18:05:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:45.958 18:05:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:45.958 18:05:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:45.958 18:05:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:45.958 18:05:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:45.958 18:05:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:45.958 18:05:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:45.958 18:05:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:45.958 18:05:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:45.958 18:05:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:45.958 18:05:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:45.958 18:05:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:45.958 18:05:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:45.958 18:05:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.958 18:05:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:45.958 18:05:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.958 18:05:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.958 18:05:43 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:45.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:10:45.958 00:10:45.958 --- 10.0.0.2 ping statistics --- 00:10:45.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.958 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:45.958 18:05:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:46.217 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:46.217 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:10:46.217 00:10:46.217 --- 10.0.0.3 ping statistics --- 00:10:46.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.217 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:46.217 18:05:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:46.217 00:10:46.217 --- 10.0.0.1 ping statistics --- 00:10:46.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.217 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:46.217 18:05:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.217 18:05:43 -- nvmf/common.sh@421 -- # return 0 00:10:46.217 18:05:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:46.217 18:05:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.217 18:05:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:46.217 18:05:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:46.217 18:05:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.217 18:05:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:46.217 18:05:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:46.217 18:05:43 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:46.217 18:05:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:46.217 18:05:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:46.217 18:05:43 -- common/autotest_common.sh@10 -- # set +x 00:10:46.217 18:05:43 -- nvmf/common.sh@469 -- # nvmfpid=65793 00:10:46.217 18:05:43 -- nvmf/common.sh@470 -- # waitforlisten 65793 00:10:46.217 18:05:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.217 18:05:43 -- common/autotest_common.sh@819 -- # '[' -z 65793 ']' 00:10:46.217 18:05:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.217 18:05:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:46.217 18:05:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.217 18:05:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:46.217 18:05:43 -- common/autotest_common.sh@10 -- # set +x 00:10:46.217 [2024-04-25 18:05:43.981826] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
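[Editor's note] At this point nvmfappstart launches the target inside the namespace and records its pid before the rpc.sh test can issue commands. A rough, illustrative sketch of that step based on the commands visible in the trace (the retry loop stands in for the real waitforlisten helper, and the trap shown in the log also dumps shared memory and runs nvmftestfini):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do                                       # poll until the RPC socket answers
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    trap "kill $nvmfpid" SIGINT SIGTERM EXIT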
00:10:46.217 [2024-04-25 18:05:43.981933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.217 [2024-04-25 18:05:44.129939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.475 [2024-04-25 18:05:44.231822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:46.475 [2024-04-25 18:05:44.231953] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.475 [2024-04-25 18:05:44.231965] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.475 [2024-04-25 18:05:44.231973] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.475 [2024-04-25 18:05:44.232174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.475 [2024-04-25 18:05:44.234908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.475 [2024-04-25 18:05:44.235080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.475 [2024-04-25 18:05:44.235204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.041 18:05:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:47.041 18:05:44 -- common/autotest_common.sh@852 -- # return 0 00:10:47.041 18:05:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:47.041 18:05:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:47.041 18:05:44 -- common/autotest_common.sh@10 -- # set +x 00:10:47.041 18:05:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.041 18:05:44 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:47.041 18:05:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.041 18:05:44 -- common/autotest_common.sh@10 -- # set +x 00:10:47.300 18:05:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.300 18:05:44 -- target/rpc.sh@26 -- # stats='{ 00:10:47.300 "poll_groups": [ 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_0", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [] 00:10:47.300 }, 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_1", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [] 00:10:47.300 }, 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_2", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [] 00:10:47.300 }, 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_3", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [] 00:10:47.300 } 00:10:47.300 ], 00:10:47.300 "tick_rate": 2200000000 00:10:47.300 }' 00:10:47.300 
18:05:44 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:47.300 18:05:44 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:47.300 18:05:44 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:47.300 18:05:44 -- target/rpc.sh@15 -- # wc -l 00:10:47.300 18:05:45 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:47.300 18:05:45 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:47.300 18:05:45 -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:47.300 18:05:45 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.300 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.300 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.300 [2024-04-25 18:05:45.096570] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.300 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.300 18:05:45 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:47.300 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.300 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.300 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.300 18:05:45 -- target/rpc.sh@33 -- # stats='{ 00:10:47.300 "poll_groups": [ 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_0", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [ 00:10:47.300 { 00:10:47.300 "trtype": "TCP" 00:10:47.300 } 00:10:47.300 ] 00:10:47.300 }, 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_1", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [ 00:10:47.300 { 00:10:47.300 "trtype": "TCP" 00:10:47.300 } 00:10:47.300 ] 00:10:47.300 }, 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_2", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [ 00:10:47.300 { 00:10:47.300 "trtype": "TCP" 00:10:47.300 } 00:10:47.300 ] 00:10:47.300 }, 00:10:47.300 { 00:10:47.300 "admin_qpairs": 0, 00:10:47.300 "completed_nvme_io": 0, 00:10:47.300 "current_admin_qpairs": 0, 00:10:47.300 "current_io_qpairs": 0, 00:10:47.300 "io_qpairs": 0, 00:10:47.300 "name": "nvmf_tgt_poll_group_3", 00:10:47.300 "pending_bdev_io": 0, 00:10:47.300 "transports": [ 00:10:47.300 { 00:10:47.300 "trtype": "TCP" 00:10:47.300 } 00:10:47.300 ] 00:10:47.300 } 00:10:47.300 ], 00:10:47.300 "tick_rate": 2200000000 00:10:47.300 }' 00:10:47.300 18:05:45 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:47.300 18:05:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:47.300 18:05:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:47.300 18:05:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:47.300 18:05:45 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:47.300 18:05:45 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:47.300 18:05:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:47.300 18:05:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:47.300 18:05:45 -- 
target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:47.585 18:05:45 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:47.585 18:05:45 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:47.585 18:05:45 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:47.585 18:05:45 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:47.585 18:05:45 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:47.585 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.585 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.585 Malloc1 00:10:47.585 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.585 18:05:45 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.585 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.585 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.585 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.585 18:05:45 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.585 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.585 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.585 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.585 18:05:45 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:47.585 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.585 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.585 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.585 18:05:45 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.585 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.585 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.585 [2024-04-25 18:05:45.318286] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.585 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.585 18:05:45 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -a 10.0.0.2 -s 4420 00:10:47.585 18:05:45 -- common/autotest_common.sh@640 -- # local es=0 00:10:47.585 18:05:45 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -a 10.0.0.2 -s 4420 00:10:47.585 18:05:45 -- common/autotest_common.sh@628 -- # local arg=nvme 00:10:47.585 18:05:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.585 18:05:45 -- common/autotest_common.sh@632 -- # type -t nvme 00:10:47.585 18:05:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.585 18:05:45 -- common/autotest_common.sh@634 -- # type -P nvme 00:10:47.585 18:05:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.585 18:05:45 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:10:47.585 18:05:45 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:10:47.585 18:05:45 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -a 10.0.0.2 -s 4420 00:10:47.585 [2024-04-25 18:05:45.344177] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11' 00:10:47.585 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:47.585 could not add new controller: failed to write to nvme-fabrics device 00:10:47.585 18:05:45 -- common/autotest_common.sh@643 -- # es=1 00:10:47.585 18:05:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:47.585 18:05:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:47.586 18:05:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:47.586 18:05:45 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:10:47.586 18:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.586 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:47.586 18:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.586 18:05:45 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.852 18:05:45 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.852 18:05:45 -- common/autotest_common.sh@1177 -- # local i=0 00:10:47.852 18:05:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.852 18:05:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:47.852 18:05:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:49.755 18:05:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:49.755 18:05:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:49.755 18:05:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.755 18:05:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:49.755 18:05:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.755 18:05:47 -- common/autotest_common.sh@1187 -- # return 0 00:10:49.755 18:05:47 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.755 18:05:47 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.755 18:05:47 -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.755 18:05:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:49.755 18:05:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.755 18:05:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:49.755 18:05:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.755 18:05:47 -- common/autotest_common.sh@1210 -- # return 0 00:10:49.755 18:05:47 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:10:49.755 18:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.755 18:05:47 -- common/autotest_common.sh@10 
-- # set +x 00:10:49.755 18:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.755 18:05:47 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.755 18:05:47 -- common/autotest_common.sh@640 -- # local es=0 00:10:49.755 18:05:47 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.755 18:05:47 -- common/autotest_common.sh@628 -- # local arg=nvme 00:10:49.755 18:05:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:49.755 18:05:47 -- common/autotest_common.sh@632 -- # type -t nvme 00:10:49.755 18:05:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:49.755 18:05:47 -- common/autotest_common.sh@634 -- # type -P nvme 00:10:49.755 18:05:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:49.755 18:05:47 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:10:49.755 18:05:47 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:10:49.755 18:05:47 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.755 [2024-04-25 18:05:47.644566] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11' 00:10:49.755 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:49.755 could not add new controller: failed to write to nvme-fabrics device 00:10:49.755 18:05:47 -- common/autotest_common.sh@643 -- # es=1 00:10:49.755 18:05:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:49.755 18:05:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:49.755 18:05:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:49.755 18:05:47 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:49.755 18:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.755 18:05:47 -- common/autotest_common.sh@10 -- # set +x 00:10:49.755 18:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.755 18:05:47 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.014 18:05:47 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.014 18:05:47 -- common/autotest_common.sh@1177 -- # local i=0 00:10:50.014 18:05:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.014 18:05:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:50.014 18:05:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:51.919 18:05:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:51.919 18:05:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:51.919 18:05:49 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.919 18:05:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:51.920 18:05:49 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.920 18:05:49 -- common/autotest_common.sh@1187 -- # return 0 00:10:51.920 18:05:49 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.178 18:05:49 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.178 18:05:49 -- common/autotest_common.sh@1198 -- # local i=0 00:10:52.178 18:05:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:52.178 18:05:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.178 18:05:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:52.178 18:05:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.178 18:05:49 -- common/autotest_common.sh@1210 -- # return 0 00:10:52.178 18:05:49 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.178 18:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.178 18:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 18:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.178 18:05:49 -- target/rpc.sh@81 -- # seq 1 5 00:10:52.178 18:05:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:52.178 18:05:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.178 18:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.178 18:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 18:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.178 18:05:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.178 18:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.178 18:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 [2024-04-25 18:05:49.936566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.178 18:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.178 18:05:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:52.178 18:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.178 18:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 18:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.178 18:05:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.178 18:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.178 18:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 18:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.178 18:05:49 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.437 18:05:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.437 18:05:50 -- common/autotest_common.sh@1177 -- # local i=0 00:10:52.437 18:05:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.437 18:05:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:52.437 18:05:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:54.340 18:05:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
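The waitforserial trace above is the common.sh polling helper: after nvme connect it keeps listing block devices until one whose serial matches the subsystem's shows up, giving up after a bounded number of retries. A minimal sketch of that pattern follows; the 2-second sleep, the 15-try bound and the SPDKISFASTANDAWESOME serial mirror the trace, while the wait_for_serial name and its arguments are illustrative, not the literal common.sh source.

    # Poll lsblk until a namespace with the expected serial appears (sketch).
    wait_for_serial() {
        local serial=$1 expected=${2:-1} i=0 found
        while (( i++ <= 15 )); do                    # bounded retry, as in the trace
            sleep 2                                  # let the fabrics connect settle
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0      # device(s) visible, done
        done
        echo "serial $serial never showed up" >&2
        return 1
    }
    # typical use right after:  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # wait_for_serial SPDKISFASTANDAWESOME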
00:10:54.340 18:05:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:54.340 18:05:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.340 18:05:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:54.341 18:05:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.341 18:05:52 -- common/autotest_common.sh@1187 -- # return 0 00:10:54.341 18:05:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.601 18:05:52 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.601 18:05:52 -- common/autotest_common.sh@1198 -- # local i=0 00:10:54.601 18:05:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:54.601 18:05:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.601 18:05:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.601 18:05:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:54.601 18:05:52 -- common/autotest_common.sh@1210 -- # return 0 00:10:54.601 18:05:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.601 18:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:54.601 18:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 18:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:54.601 18:05:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.601 18:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:54.601 18:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 18:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:54.601 18:05:52 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:54.601 18:05:52 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:54.601 18:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:54.601 18:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 18:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:54.601 18:05:52 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.601 18:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:54.601 18:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 [2024-04-25 18:05:52.350223] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.601 18:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:54.601 18:05:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:54.601 18:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:54.601 18:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 18:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:54.601 18:05:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:54.601 18:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:54.601 18:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 18:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:54.601 18:05:52 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 
--hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.860 18:05:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.860 18:05:52 -- common/autotest_common.sh@1177 -- # local i=0 00:10:54.860 18:05:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.860 18:05:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:54.860 18:05:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:56.760 18:05:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:56.760 18:05:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:56.760 18:05:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.760 18:05:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:56.760 18:05:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.760 18:05:54 -- common/autotest_common.sh@1187 -- # return 0 00:10:56.760 18:05:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.760 18:05:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.760 18:05:54 -- common/autotest_common.sh@1198 -- # local i=0 00:10:56.760 18:05:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:56.760 18:05:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.760 18:05:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:56.760 18:05:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.760 18:05:54 -- common/autotest_common.sh@1210 -- # return 0 00:10:56.760 18:05:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.760 18:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.760 18:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:56.760 18:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.760 18:05:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.760 18:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.760 18:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:56.760 18:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.760 18:05:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:56.760 18:05:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.760 18:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.760 18:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:56.760 18:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.760 18:05:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.760 18:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.760 18:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:56.760 [2024-04-25 18:05:54.659110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.760 18:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.760 18:05:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:56.760 18:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.760 18:05:54 -- common/autotest_common.sh@10 -- # set 
+x 00:10:56.760 18:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.760 18:05:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.760 18:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.760 18:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:56.760 18:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.760 18:05:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.020 18:05:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.020 18:05:54 -- common/autotest_common.sh@1177 -- # local i=0 00:10:57.020 18:05:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.020 18:05:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:57.020 18:05:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:59.553 18:05:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:59.553 18:05:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:59.553 18:05:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.553 18:05:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:59.553 18:05:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.554 18:05:56 -- common/autotest_common.sh@1187 -- # return 0 00:10:59.554 18:05:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.554 18:05:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.554 18:05:57 -- common/autotest_common.sh@1198 -- # local i=0 00:10:59.554 18:05:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:59.554 18:05:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.554 18:05:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:59.554 18:05:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.554 18:05:57 -- common/autotest_common.sh@1210 -- # return 0 00:10:59.554 18:05:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.554 18:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.554 18:05:57 -- common/autotest_common.sh@10 -- # set +x 00:10:59.554 18:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.554 18:05:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.554 18:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.554 18:05:57 -- common/autotest_common.sh@10 -- # set +x 00:10:59.554 18:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.554 18:05:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:59.554 18:05:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.554 18:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.554 18:05:57 -- common/autotest_common.sh@10 -- # set +x 00:10:59.554 18:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.554 18:05:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.554 18:05:57 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:10:59.554 18:05:57 -- common/autotest_common.sh@10 -- # set +x 00:10:59.554 [2024-04-25 18:05:57.068733] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.554 18:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.554 18:05:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:59.554 18:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.554 18:05:57 -- common/autotest_common.sh@10 -- # set +x 00:10:59.554 18:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.554 18:05:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.554 18:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.554 18:05:57 -- common/autotest_common.sh@10 -- # set +x 00:10:59.554 18:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.554 18:05:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.554 18:05:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.554 18:05:57 -- common/autotest_common.sh@1177 -- # local i=0 00:10:59.554 18:05:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.554 18:05:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:59.554 18:05:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:01.458 18:05:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:01.458 18:05:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:01.458 18:05:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.458 18:05:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:01.458 18:05:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.458 18:05:59 -- common/autotest_common.sh@1187 -- # return 0 00:11:01.458 18:05:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.458 18:05:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.458 18:05:59 -- common/autotest_common.sh@1198 -- # local i=0 00:11:01.458 18:05:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:01.458 18:05:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.458 18:05:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:01.458 18:05:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.458 18:05:59 -- common/autotest_common.sh@1210 -- # return 0 00:11:01.458 18:05:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.458 18:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:01.458 18:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:01.458 18:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:01.458 18:05:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.458 18:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:01.458 18:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:01.458 18:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:01.458 18:05:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
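Each pass of the for i in $(seq 1 $loops) loop above builds a subsystem from scratch, exposes it over TCP, connects with nvme-cli and tears everything down again. Below is a condensed, stand-alone sketch of one iteration driven by scripts/rpc.py against a running nvmf_tgt; the RPC names, the Malloc1 bdev, nsid 5 and the 10.0.0.2:4420 listener come straight from the trace, while the $rpc variable and the plain sleep (in place of the lsblk polling) are simplifications, and the --hostnqn/--hostid flags the test passes are left out.

    # One iteration of the create/connect/teardown cycle (sketch; assumes nvmf_tgt is running
    # with a TCP transport created and a Malloc1 bdev already present).
    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5      # attach the bdev as namespace 5
        $rpc nvmf_subsystem_allow_any_host "$nqn"           # otherwise the connect below is rejected
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$nqn"
        sleep 2                                             # the real test polls lsblk for the serial instead
        nvme disconnect -n "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 5
        $rpc nvmf_delete_subsystem "$nqn"
    done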
00:11:01.458 18:05:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:01.458 18:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:01.458 18:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:01.458 18:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:01.458 18:05:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.458 18:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:01.458 18:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:01.458 [2024-04-25 18:05:59.374504] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.458 18:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:01.458 18:05:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:01.458 18:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:01.458 18:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:01.458 18:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:01.458 18:05:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:01.458 18:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:01.458 18:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:01.717 18:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:01.717 18:05:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.717 18:05:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:01.717 18:05:59 -- common/autotest_common.sh@1177 -- # local i=0 00:11:01.717 18:05:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.717 18:05:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:01.717 18:05:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:04.249 18:06:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:04.249 18:06:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:04.249 18:06:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.249 18:06:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:04.249 18:06:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.249 18:06:01 -- common/autotest_common.sh@1187 -- # return 0 00:11:04.249 18:06:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.249 18:06:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.249 18:06:01 -- common/autotest_common.sh@1198 -- # local i=0 00:11:04.249 18:06:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.249 18:06:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:04.249 18:06:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:04.249 18:06:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.249 18:06:01 -- common/autotest_common.sh@1210 -- # return 0 00:11:04.249 18:06:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.249 18:06:01 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:04.249 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.249 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.249 18:06:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.249 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.249 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.249 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.249 18:06:01 -- target/rpc.sh@99 -- # seq 1 5 00:11:04.249 18:06:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:04.249 18:06:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.249 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.249 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.249 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 [2024-04-25 18:06:01.679350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:04.250 18:06:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 [2024-04-25 18:06:01.727371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:04.250 18:06:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 [2024-04-25 18:06:01.779450] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:04.250 18:06:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 [2024-04-25 18:06:01.831525] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:04.250 18:06:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 [2024-04-25 18:06:01.879591] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:04.250 18:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:04.250 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:04.250 18:06:01 -- target/rpc.sh@110 -- # stats='{ 00:11:04.250 "poll_groups": [ 00:11:04.250 { 00:11:04.250 "admin_qpairs": 2, 00:11:04.250 "completed_nvme_io": 66, 00:11:04.250 "current_admin_qpairs": 0, 00:11:04.250 "current_io_qpairs": 0, 00:11:04.250 "io_qpairs": 16, 00:11:04.250 "name": "nvmf_tgt_poll_group_0", 00:11:04.250 "pending_bdev_io": 0, 00:11:04.250 "transports": [ 00:11:04.250 { 00:11:04.250 "trtype": "TCP" 00:11:04.250 } 00:11:04.250 ] 00:11:04.250 }, 00:11:04.250 { 00:11:04.250 "admin_qpairs": 3, 00:11:04.250 "completed_nvme_io": 68, 00:11:04.250 "current_admin_qpairs": 0, 00:11:04.250 "current_io_qpairs": 0, 00:11:04.250 "io_qpairs": 17, 00:11:04.250 "name": "nvmf_tgt_poll_group_1", 00:11:04.250 "pending_bdev_io": 0, 00:11:04.250 "transports": [ 00:11:04.250 { 00:11:04.250 "trtype": "TCP" 00:11:04.250 } 00:11:04.250 ] 00:11:04.251 }, 00:11:04.251 { 00:11:04.251 "admin_qpairs": 1, 00:11:04.251 "completed_nvme_io": 117, 00:11:04.251 "current_admin_qpairs": 0, 00:11:04.251 "current_io_qpairs": 0, 00:11:04.251 "io_qpairs": 19, 00:11:04.251 "name": "nvmf_tgt_poll_group_2", 00:11:04.251 "pending_bdev_io": 0, 00:11:04.251 "transports": [ 00:11:04.251 { 00:11:04.251 "trtype": "TCP" 00:11:04.251 } 00:11:04.251 ] 00:11:04.251 }, 00:11:04.251 { 00:11:04.251 "admin_qpairs": 1, 00:11:04.251 "completed_nvme_io": 169, 00:11:04.251 "current_admin_qpairs": 0, 00:11:04.251 "current_io_qpairs": 0, 00:11:04.251 "io_qpairs": 18, 00:11:04.251 "name": "nvmf_tgt_poll_group_3", 00:11:04.251 "pending_bdev_io": 0, 00:11:04.251 "transports": [ 00:11:04.251 { 00:11:04.251 "trtype": "TCP" 00:11:04.251 } 00:11:04.251 ] 00:11:04.251 } 00:11:04.251 ], 00:11:04.251 "tick_rate": 2200000000 00:11:04.251 }' 00:11:04.251 18:06:01 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:04.251 18:06:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:04.251 18:06:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:04.251 18:06:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:04.251 18:06:01 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:04.251 18:06:01 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:04.251 18:06:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:04.251 18:06:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
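The jsum/jcount checks around the nvmf_get_stats output above are small jq + awk wrappers: jcount counts how many values a filter yields and jsum adds them up across the poll groups. A plausible stand-alone reconstruction, consistent with the jq, wc -l and awk invocations in the trace (the real helpers live in target/rpc.sh and filter a captured $stats variable rather than calling the RPC each time):

    # Sum / count a field across the nvmf_get_stats poll groups (sketch).
    jcount() { jq "$1" | wc -l; }                       # how many values the filter yields
    jsum()   { jq "$1" | awk '{s+=$1} END {print s}'; } # numeric sum of those values

    stats=$(scripts/rpc.py nvmf_get_stats)              # capture once, query many times
    echo "$stats" | jcount '.poll_groups[].name'        # 4 in this run, one poll group per core
    echo "$stats" | jsum   '.poll_groups[].io_qpairs'   # 16+17+19+18 = 70 I/O qpairs total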
00:11:04.251 18:06:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:04.251 18:06:02 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:04.251 18:06:02 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:04.251 18:06:02 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:04.251 18:06:02 -- target/rpc.sh@123 -- # nvmftestfini 00:11:04.251 18:06:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:04.251 18:06:02 -- nvmf/common.sh@116 -- # sync 00:11:04.251 18:06:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:04.251 18:06:02 -- nvmf/common.sh@119 -- # set +e 00:11:04.251 18:06:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:04.251 18:06:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:04.251 rmmod nvme_tcp 00:11:04.251 rmmod nvme_fabrics 00:11:04.251 rmmod nvme_keyring 00:11:04.251 18:06:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:04.251 18:06:02 -- nvmf/common.sh@123 -- # set -e 00:11:04.251 18:06:02 -- nvmf/common.sh@124 -- # return 0 00:11:04.251 18:06:02 -- nvmf/common.sh@477 -- # '[' -n 65793 ']' 00:11:04.251 18:06:02 -- nvmf/common.sh@478 -- # killprocess 65793 00:11:04.251 18:06:02 -- common/autotest_common.sh@926 -- # '[' -z 65793 ']' 00:11:04.251 18:06:02 -- common/autotest_common.sh@930 -- # kill -0 65793 00:11:04.251 18:06:02 -- common/autotest_common.sh@931 -- # uname 00:11:04.251 18:06:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:04.251 18:06:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65793 00:11:04.251 killing process with pid 65793 00:11:04.251 18:06:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:04.251 18:06:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:04.251 18:06:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65793' 00:11:04.251 18:06:02 -- common/autotest_common.sh@945 -- # kill 65793 00:11:04.251 18:06:02 -- common/autotest_common.sh@950 -- # wait 65793 00:11:04.509 18:06:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:04.509 18:06:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:04.509 18:06:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:04.509 18:06:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.509 18:06:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:04.509 18:06:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.509 18:06:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.509 18:06:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.767 18:06:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:04.767 00:11:04.767 real 0m19.009s 00:11:04.767 user 1m11.833s 00:11:04.767 sys 0m2.147s 00:11:04.767 18:06:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.767 ************************************ 00:11:04.767 END TEST nvmf_rpc 00:11:04.767 ************************************ 00:11:04.767 18:06:02 -- common/autotest_common.sh@10 -- # set +x 00:11:04.767 18:06:02 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:04.768 18:06:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:04.768 18:06:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.768 18:06:02 -- common/autotest_common.sh@10 -- # set +x 00:11:04.768 ************************************ 00:11:04.768 START TEST nvmf_invalid 00:11:04.768 ************************************ 00:11:04.768 
18:06:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:04.768 * Looking for test storage... 00:11:04.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.768 18:06:02 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.768 18:06:02 -- nvmf/common.sh@7 -- # uname -s 00:11:04.768 18:06:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.768 18:06:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.768 18:06:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.768 18:06:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.768 18:06:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.768 18:06:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.768 18:06:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.768 18:06:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.768 18:06:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.768 18:06:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.768 18:06:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:04.768 18:06:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:04.768 18:06:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.768 18:06:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.768 18:06:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.768 18:06:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.768 18:06:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.768 18:06:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.768 18:06:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.768 18:06:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.768 18:06:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.768 18:06:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.768 18:06:02 -- paths/export.sh@5 -- # export PATH 00:11:04.768 18:06:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.768 18:06:02 -- nvmf/common.sh@46 -- # : 0 00:11:04.768 18:06:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:04.768 18:06:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:04.768 18:06:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:04.768 18:06:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.768 18:06:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.768 18:06:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:04.768 18:06:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:04.768 18:06:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:04.768 18:06:02 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:04.768 18:06:02 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:04.768 18:06:02 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:04.768 18:06:02 -- target/invalid.sh@14 -- # target=foobar 00:11:04.768 18:06:02 -- target/invalid.sh@16 -- # RANDOM=0 00:11:04.768 18:06:02 -- target/invalid.sh@34 -- # nvmftestinit 00:11:04.768 18:06:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:04.768 18:06:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.768 18:06:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:04.768 18:06:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:04.768 18:06:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:04.768 18:06:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.768 18:06:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.768 18:06:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.768 18:06:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:04.768 18:06:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:04.768 18:06:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:04.768 18:06:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:04.768 18:06:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:04.768 18:06:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:04.768 18:06:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.768 18:06:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.768 18:06:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
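The nvme gen-hostnqn step in the common.sh sourcing above is where the b1b6de6e-... identity used by every connect in this log comes from: one random host NQN is generated per run and its uuid portion doubles as the --hostid. A small sketch of that derivation; the parameter-expansion trick for extracting the uuid is an assumption that matches the values seen in the trace, not necessarily the exact common.sh code.

    # Generate the per-run host identity used by the nvme connect calls (sketch).
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep just the uuid after the last ':'

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"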
00:11:04.768 18:06:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:04.768 18:06:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.768 18:06:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.768 18:06:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.768 18:06:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.768 18:06:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.768 18:06:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.768 18:06:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.768 18:06:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:04.768 18:06:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:04.768 18:06:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:04.768 Cannot find device "nvmf_tgt_br" 00:11:04.768 18:06:02 -- nvmf/common.sh@154 -- # true 00:11:04.768 18:06:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.768 Cannot find device "nvmf_tgt_br2" 00:11:04.768 18:06:02 -- nvmf/common.sh@155 -- # true 00:11:04.768 18:06:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:04.768 18:06:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:04.768 Cannot find device "nvmf_tgt_br" 00:11:04.768 18:06:02 -- nvmf/common.sh@157 -- # true 00:11:04.768 18:06:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:05.026 Cannot find device "nvmf_tgt_br2" 00:11:05.026 18:06:02 -- nvmf/common.sh@158 -- # true 00:11:05.026 18:06:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:05.026 18:06:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:05.026 18:06:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.026 18:06:02 -- nvmf/common.sh@161 -- # true 00:11:05.026 18:06:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.026 18:06:02 -- nvmf/common.sh@162 -- # true 00:11:05.026 18:06:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:05.026 18:06:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:05.026 18:06:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:05.026 18:06:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:05.026 18:06:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:05.026 18:06:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:05.026 18:06:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:05.026 18:06:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:05.026 18:06:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:05.026 18:06:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:05.026 18:06:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:05.026 18:06:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:05.026 18:06:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
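The ip commands above and on the following trace lines are nvmf_veth_init building the virtual test network: a bridge (nvmf_br) in the root namespace, one veth pair whose nvmf_init_if end carries the initiator address 10.0.0.1, and two more pairs whose nvmf_tgt_if/nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace to hold the target addresses 10.0.0.2 and 10.0.0.3. Below is a condensed single-target reconstruction of that topology; it folds in the bridge-enslaving and link-up steps the next trace lines perform and drops the second target interface and the iptables ACCEPT rules.

    # Minimal veth + netns topology for NVMe/TCP target testing (sketch, needs root).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge                              # ties the two host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # sanity check, as the trace does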
00:11:05.027 18:06:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:05.027 18:06:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:05.027 18:06:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:05.294 18:06:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:05.294 18:06:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:05.294 18:06:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:05.294 18:06:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:05.294 18:06:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:05.294 18:06:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:05.294 18:06:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:05.294 18:06:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:05.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:11:05.294 00:11:05.294 --- 10.0.0.2 ping statistics --- 00:11:05.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.294 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:05.294 18:06:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:05.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:05.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:05.294 00:11:05.294 --- 10.0.0.3 ping statistics --- 00:11:05.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.294 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:05.294 18:06:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:11:05.294 00:11:05.294 --- 10.0.0.1 ping statistics --- 00:11:05.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.294 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:05.294 18:06:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.294 18:06:03 -- nvmf/common.sh@421 -- # return 0 00:11:05.294 18:06:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:05.294 18:06:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.294 18:06:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:05.294 18:06:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:05.294 18:06:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.294 18:06:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:05.294 18:06:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:05.294 18:06:03 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:05.294 18:06:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:05.294 18:06:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:05.294 18:06:03 -- common/autotest_common.sh@10 -- # set +x 00:11:05.294 18:06:03 -- nvmf/common.sh@469 -- # nvmfpid=66308 00:11:05.294 18:06:03 -- nvmf/common.sh@470 -- # waitforlisten 66308 00:11:05.294 18:06:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.294 18:06:03 -- common/autotest_common.sh@819 -- # '[' -z 66308 ']' 00:11:05.294 18:06:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.294 18:06:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:05.294 18:06:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.294 18:06:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:05.294 18:06:03 -- common/autotest_common.sh@10 -- # set +x 00:11:05.294 [2024-04-25 18:06:03.106825] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:05.294 [2024-04-25 18:06:03.107486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.560 [2024-04-25 18:06:03.247017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.560 [2024-04-25 18:06:03.328399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:05.560 [2024-04-25 18:06:03.328570] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.560 [2024-04-25 18:06:03.328583] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.560 [2024-04-25 18:06:03.328591] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
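[Note: after the connectivity pings succeed, nvmfappstart launches the target inside the namespace ("ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF") and then waits for the RPC socket before issuing RPCs. A sketch of doing the same by hand; the polling loop below is an illustration only, not the actual waitforlisten helper from autotest_common.sh:

  # Start the target inside the namespace, background it, remember the pid.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Wait (up to ~10 s) for the JSON-RPC unix socket to appear.
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done
  # Sanity check: the target process is still alive.
  kill -0 "$nvmfpid"
]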
00:11:05.560 [2024-04-25 18:06:03.328703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.561 [2024-04-25 18:06:03.329030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.561 [2024-04-25 18:06:03.329041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.561 [2024-04-25 18:06:03.329801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.124 18:06:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:06.124 18:06:04 -- common/autotest_common.sh@852 -- # return 0 00:11:06.124 18:06:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:06.124 18:06:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:06.124 18:06:04 -- common/autotest_common.sh@10 -- # set +x 00:11:06.382 18:06:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.382 18:06:04 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:06.382 18:06:04 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25570 00:11:06.639 [2024-04-25 18:06:04.342598] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:06.639 18:06:04 -- target/invalid.sh@40 -- # out='2024/04/25 18:06:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25570 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:06.639 request: 00:11:06.639 { 00:11:06.639 "method": "nvmf_create_subsystem", 00:11:06.639 "params": { 00:11:06.639 "nqn": "nqn.2016-06.io.spdk:cnode25570", 00:11:06.639 "tgt_name": "foobar" 00:11:06.639 } 00:11:06.639 } 00:11:06.639 Got JSON-RPC error response 00:11:06.639 GoRPCClient: error on JSON-RPC call' 00:11:06.639 18:06:04 -- target/invalid.sh@41 -- # [[ 2024/04/25 18:06:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25570 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:06.639 request: 00:11:06.639 { 00:11:06.639 "method": "nvmf_create_subsystem", 00:11:06.639 "params": { 00:11:06.639 "nqn": "nqn.2016-06.io.spdk:cnode25570", 00:11:06.639 "tgt_name": "foobar" 00:11:06.639 } 00:11:06.639 } 00:11:06.639 Got JSON-RPC error response 00:11:06.639 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:06.639 18:06:04 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:06.639 18:06:04 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32273 00:11:06.897 [2024-04-25 18:06:04.579000] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32273: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:06.897 18:06:04 -- target/invalid.sh@45 -- # out='2024/04/25 18:06:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode32273 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:06.897 request: 00:11:06.897 { 00:11:06.897 "method": "nvmf_create_subsystem", 00:11:06.897 "params": { 00:11:06.897 "nqn": "nqn.2016-06.io.spdk:cnode32273", 00:11:06.897 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:06.897 } 00:11:06.897 } 00:11:06.897 Got JSON-RPC error response 00:11:06.897 GoRPCClient: error on JSON-RPC call' 00:11:06.897 18:06:04 -- target/invalid.sh@46 -- # [[ 2024/04/25 18:06:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode32273 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:06.897 request: 00:11:06.897 { 00:11:06.897 "method": "nvmf_create_subsystem", 00:11:06.897 "params": { 00:11:06.897 "nqn": "nqn.2016-06.io.spdk:cnode32273", 00:11:06.897 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:06.897 } 00:11:06.897 } 00:11:06.897 Got JSON-RPC error response 00:11:06.897 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:06.897 18:06:04 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:06.897 18:06:04 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16552 00:11:06.897 [2024-04-25 18:06:04.803346] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16552: invalid model number 'SPDK_Controller' 00:11:06.897 18:06:04 -- target/invalid.sh@50 -- # out='2024/04/25 18:06:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16552], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:06.897 request: 00:11:06.897 { 00:11:06.897 "method": "nvmf_create_subsystem", 00:11:06.897 "params": { 00:11:06.897 "nqn": "nqn.2016-06.io.spdk:cnode16552", 00:11:06.897 "model_number": "SPDK_Controller\u001f" 00:11:06.897 } 00:11:06.897 } 00:11:06.897 Got JSON-RPC error response 00:11:06.897 GoRPCClient: error on JSON-RPC call' 00:11:06.897 18:06:04 -- target/invalid.sh@51 -- # [[ 2024/04/25 18:06:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16552], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:06.897 request: 00:11:06.897 { 00:11:06.897 "method": "nvmf_create_subsystem", 00:11:06.897 "params": { 00:11:06.897 "nqn": "nqn.2016-06.io.spdk:cnode16552", 00:11:06.897 "model_number": "SPDK_Controller\u001f" 00:11:06.897 } 00:11:06.897 } 00:11:06.897 Got JSON-RPC error response 00:11:06.897 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:06.897 18:06:04 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:06.897 18:06:04 -- target/invalid.sh@19 -- # local length=21 ll 00:11:06.897 18:06:04 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:06.897 18:06:04 -- target/invalid.sh@21 -- # local chars 00:11:06.897 18:06:04 -- target/invalid.sh@22 -- # local string 00:11:06.897 18:06:04 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:06.897 18:06:04 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 125 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+='}' 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 85 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=U 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 102 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=f 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 84 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=T 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 67 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=C 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 51 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=3 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 118 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=v 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 99 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=c 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 84 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=T 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # printf %x 89 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:07.155 18:06:04 -- target/invalid.sh@25 -- # string+=Y 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.155 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 99 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=c 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length 
)) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 107 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=k 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 46 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=. 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 81 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=Q 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 106 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=j 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 96 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+='`' 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 107 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=k 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 70 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=F 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 58 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=: 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 51 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+=3 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # printf %x 40 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:07.156 18:06:04 -- target/invalid.sh@25 -- # string+='(' 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.156 18:06:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.156 18:06:04 -- target/invalid.sh@28 -- # [[ } == \- ]] 00:11:07.156 18:06:04 -- target/invalid.sh@31 -- # echo '}UfTC3vcTYck.Qj`kF:3(' 00:11:07.156 18:06:04 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '}UfTC3vcTYck.Qj`kF:3(' nqn.2016-06.io.spdk:cnode14745 
00:11:07.414 [2024-04-25 18:06:05.123742] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14745: invalid serial number '}UfTC3vcTYck.Qj`kF:3(' 00:11:07.415 18:06:05 -- target/invalid.sh@54 -- # out='2024/04/25 18:06:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14745 serial_number:}UfTC3vcTYck.Qj`kF:3(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN }UfTC3vcTYck.Qj`kF:3( 00:11:07.415 request: 00:11:07.415 { 00:11:07.415 "method": "nvmf_create_subsystem", 00:11:07.415 "params": { 00:11:07.415 "nqn": "nqn.2016-06.io.spdk:cnode14745", 00:11:07.415 "serial_number": "}UfTC3vcTYck.Qj`kF:3(" 00:11:07.415 } 00:11:07.415 } 00:11:07.415 Got JSON-RPC error response 00:11:07.415 GoRPCClient: error on JSON-RPC call' 00:11:07.415 18:06:05 -- target/invalid.sh@55 -- # [[ 2024/04/25 18:06:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14745 serial_number:}UfTC3vcTYck.Qj`kF:3(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN }UfTC3vcTYck.Qj`kF:3( 00:11:07.415 request: 00:11:07.415 { 00:11:07.415 "method": "nvmf_create_subsystem", 00:11:07.415 "params": { 00:11:07.415 "nqn": "nqn.2016-06.io.spdk:cnode14745", 00:11:07.415 "serial_number": "}UfTC3vcTYck.Qj`kF:3(" 00:11:07.415 } 00:11:07.415 } 00:11:07.415 Got JSON-RPC error response 00:11:07.415 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:07.415 18:06:05 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:07.415 18:06:05 -- target/invalid.sh@19 -- # local length=41 ll 00:11:07.415 18:06:05 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:07.415 18:06:05 -- target/invalid.sh@21 -- # local chars 00:11:07.415 18:06:05 -- target/invalid.sh@22 -- # local string 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 84 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=T 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 100 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=d 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 69 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=E 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 50 00:11:07.415 18:06:05 -- 
target/invalid.sh@25 -- # echo -e '\x32' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=2 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 122 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=z 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 57 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=9 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 92 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+='\' 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 35 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+='#' 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 70 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=F 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 107 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=k 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 68 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=D 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 118 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=v 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 98 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=b 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 99 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=c 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 97 00:11:07.415 18:06:05 -- 
target/invalid.sh@25 -- # echo -e '\x61' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=a 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 122 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=z 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 72 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=H 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 49 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=1 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 116 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=t 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 113 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=q 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 64 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=@ 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 37 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=% 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 39 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=\' 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 85 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=U 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 102 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=f 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 94 00:11:07.415 18:06:05 -- 
target/invalid.sh@25 -- # echo -e '\x5e' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+='^' 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # printf %x 98 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:07.415 18:06:05 -- target/invalid.sh@25 -- # string+=b 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.415 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 76 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=L 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 117 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=u 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 71 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=G 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 42 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+='*' 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 43 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=+ 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 79 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=O 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 50 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=2 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 59 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=';' 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 34 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+='"' 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 89 00:11:07.416 18:06:05 -- 
target/invalid.sh@25 -- # echo -e '\x59' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=Y 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 49 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=1 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 93 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=']' 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 99 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=c 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # printf %x 117 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:07.416 18:06:05 -- target/invalid.sh@25 -- # string+=u 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:07.416 18:06:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:07.416 18:06:05 -- target/invalid.sh@28 -- # [[ T == \- ]] 00:11:07.416 18:06:05 -- target/invalid.sh@31 -- # echo 'TdE2z9\#FkDvbcazH1tq@%'\''Uf^bLuG*+O2;"Y1]cu' 00:11:07.416 18:06:05 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'TdE2z9\#FkDvbcazH1tq@%'\''Uf^bLuG*+O2;"Y1]cu' nqn.2016-06.io.spdk:cnode19255 00:11:07.674 [2024-04-25 18:06:05.592503] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19255: invalid model number 'TdE2z9\#FkDvbcazH1tq@%'Uf^bLuG*+O2;"Y1]cu' 00:11:07.932 18:06:05 -- target/invalid.sh@58 -- # out='2024/04/25 18:06:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:TdE2z9\#FkDvbcazH1tq@%'\''Uf^bLuG*+O2;"Y1]cu nqn:nqn.2016-06.io.spdk:cnode19255], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN TdE2z9\#FkDvbcazH1tq@%'\''Uf^bLuG*+O2;"Y1]cu 00:11:07.932 request: 00:11:07.932 { 00:11:07.932 "method": "nvmf_create_subsystem", 00:11:07.932 "params": { 00:11:07.932 "nqn": "nqn.2016-06.io.spdk:cnode19255", 00:11:07.932 "model_number": "TdE2z9\\#FkDvbcazH1tq@%'\''Uf^bLuG*+O2;\"Y1]cu" 00:11:07.932 } 00:11:07.932 } 00:11:07.932 Got JSON-RPC error response 00:11:07.932 GoRPCClient: error on JSON-RPC call' 00:11:07.932 18:06:05 -- target/invalid.sh@59 -- # [[ 2024/04/25 18:06:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:TdE2z9\#FkDvbcazH1tq@%'Uf^bLuG*+O2;"Y1]cu nqn:nqn.2016-06.io.spdk:cnode19255], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN TdE2z9\#FkDvbcazH1tq@%'Uf^bLuG*+O2;"Y1]cu 00:11:07.932 request: 00:11:07.932 { 00:11:07.932 "method": "nvmf_create_subsystem", 00:11:07.932 "params": { 00:11:07.932 "nqn": "nqn.2016-06.io.spdk:cnode19255", 00:11:07.932 "model_number": "TdE2z9\\#FkDvbcazH1tq@%'Uf^bLuG*+O2;\"Y1]cu" 00:11:07.932 } 00:11:07.932 } 00:11:07.932 Got JSON-RPC error response 00:11:07.932 GoRPCClient: error on JSON-RPC call == 
*\I\n\v\a\l\i\d\ \M\N* ]] 00:11:07.932 18:06:05 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:07.932 [2024-04-25 18:06:05.864993] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.190 18:06:05 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:08.449 18:06:06 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:08.449 18:06:06 -- target/invalid.sh@67 -- # head -n 1 00:11:08.449 18:06:06 -- target/invalid.sh@67 -- # echo '' 00:11:08.449 18:06:06 -- target/invalid.sh@67 -- # IP= 00:11:08.449 18:06:06 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:08.707 [2024-04-25 18:06:06.425992] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:08.707 18:06:06 -- target/invalid.sh@69 -- # out='2024/04/25 18:06:06 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:08.707 request: 00:11:08.707 { 00:11:08.707 "method": "nvmf_subsystem_remove_listener", 00:11:08.707 "params": { 00:11:08.707 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:08.707 "listen_address": { 00:11:08.707 "trtype": "tcp", 00:11:08.707 "traddr": "", 00:11:08.707 "trsvcid": "4421" 00:11:08.707 } 00:11:08.707 } 00:11:08.707 } 00:11:08.707 Got JSON-RPC error response 00:11:08.707 GoRPCClient: error on JSON-RPC call' 00:11:08.707 18:06:06 -- target/invalid.sh@70 -- # [[ 2024/04/25 18:06:06 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:08.707 request: 00:11:08.707 { 00:11:08.707 "method": "nvmf_subsystem_remove_listener", 00:11:08.707 "params": { 00:11:08.707 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:08.707 "listen_address": { 00:11:08.707 "trtype": "tcp", 00:11:08.707 "traddr": "", 00:11:08.707 "trsvcid": "4421" 00:11:08.707 } 00:11:08.707 } 00:11:08.707 } 00:11:08.707 Got JSON-RPC error response 00:11:08.707 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:08.707 18:06:06 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17910 -i 0 00:11:08.965 [2024-04-25 18:06:06.658333] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17910: invalid cntlid range [0-65519] 00:11:08.965 18:06:06 -- target/invalid.sh@73 -- # out='2024/04/25 18:06:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17910], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:08.965 request: 00:11:08.965 { 00:11:08.965 "method": "nvmf_create_subsystem", 00:11:08.965 "params": { 00:11:08.965 "nqn": "nqn.2016-06.io.spdk:cnode17910", 00:11:08.965 "min_cntlid": 0 00:11:08.965 } 00:11:08.965 } 00:11:08.965 Got JSON-RPC error response 00:11:08.965 GoRPCClient: error on JSON-RPC call' 00:11:08.965 18:06:06 -- target/invalid.sh@74 -- # [[ 2024/04/25 18:06:06 
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17910], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:08.965 request: 00:11:08.965 { 00:11:08.965 "method": "nvmf_create_subsystem", 00:11:08.965 "params": { 00:11:08.965 "nqn": "nqn.2016-06.io.spdk:cnode17910", 00:11:08.965 "min_cntlid": 0 00:11:08.965 } 00:11:08.965 } 00:11:08.965 Got JSON-RPC error response 00:11:08.965 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:08.965 18:06:06 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20421 -i 65520 00:11:08.965 [2024-04-25 18:06:06.878619] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20421: invalid cntlid range [65520-65519] 00:11:08.965 18:06:06 -- target/invalid.sh@75 -- # out='2024/04/25 18:06:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20421], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:08.965 request: 00:11:08.965 { 00:11:08.965 "method": "nvmf_create_subsystem", 00:11:08.965 "params": { 00:11:08.965 "nqn": "nqn.2016-06.io.spdk:cnode20421", 00:11:08.965 "min_cntlid": 65520 00:11:08.965 } 00:11:08.965 } 00:11:08.965 Got JSON-RPC error response 00:11:08.965 GoRPCClient: error on JSON-RPC call' 00:11:08.966 18:06:06 -- target/invalid.sh@76 -- # [[ 2024/04/25 18:06:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20421], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:08.966 request: 00:11:08.966 { 00:11:08.966 "method": "nvmf_create_subsystem", 00:11:08.966 "params": { 00:11:08.966 "nqn": "nqn.2016-06.io.spdk:cnode20421", 00:11:08.966 "min_cntlid": 65520 00:11:08.966 } 00:11:08.966 } 00:11:08.966 Got JSON-RPC error response 00:11:08.966 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:09.224 18:06:06 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24668 -I 0 00:11:09.224 [2024-04-25 18:06:07.147083] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24668: invalid cntlid range [1-0] 00:11:09.482 18:06:07 -- target/invalid.sh@77 -- # out='2024/04/25 18:06:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24668], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:09.482 request: 00:11:09.482 { 00:11:09.482 "method": "nvmf_create_subsystem", 00:11:09.482 "params": { 00:11:09.482 "nqn": "nqn.2016-06.io.spdk:cnode24668", 00:11:09.482 "max_cntlid": 0 00:11:09.482 } 00:11:09.482 } 00:11:09.482 Got JSON-RPC error response 00:11:09.482 GoRPCClient: error on JSON-RPC call' 00:11:09.482 18:06:07 -- target/invalid.sh@78 -- # [[ 2024/04/25 18:06:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24668], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:09.482 request: 00:11:09.482 { 00:11:09.482 "method": "nvmf_create_subsystem", 00:11:09.482 "params": { 00:11:09.482 
"nqn": "nqn.2016-06.io.spdk:cnode24668", 00:11:09.482 "max_cntlid": 0 00:11:09.482 } 00:11:09.482 } 00:11:09.482 Got JSON-RPC error response 00:11:09.482 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:09.482 18:06:07 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3791 -I 65520 00:11:09.482 [2024-04-25 18:06:07.375457] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3791: invalid cntlid range [1-65520] 00:11:09.482 18:06:07 -- target/invalid.sh@79 -- # out='2024/04/25 18:06:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3791], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:09.482 request: 00:11:09.482 { 00:11:09.482 "method": "nvmf_create_subsystem", 00:11:09.482 "params": { 00:11:09.482 "nqn": "nqn.2016-06.io.spdk:cnode3791", 00:11:09.482 "max_cntlid": 65520 00:11:09.482 } 00:11:09.482 } 00:11:09.482 Got JSON-RPC error response 00:11:09.482 GoRPCClient: error on JSON-RPC call' 00:11:09.482 18:06:07 -- target/invalid.sh@80 -- # [[ 2024/04/25 18:06:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3791], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:09.482 request: 00:11:09.482 { 00:11:09.482 "method": "nvmf_create_subsystem", 00:11:09.482 "params": { 00:11:09.482 "nqn": "nqn.2016-06.io.spdk:cnode3791", 00:11:09.482 "max_cntlid": 65520 00:11:09.482 } 00:11:09.482 } 00:11:09.482 Got JSON-RPC error response 00:11:09.482 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:09.482 18:06:07 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29733 -i 6 -I 5 00:11:09.739 [2024-04-25 18:06:07.647933] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29733: invalid cntlid range [6-5] 00:11:09.739 18:06:07 -- target/invalid.sh@83 -- # out='2024/04/25 18:06:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode29733], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:09.740 request: 00:11:09.740 { 00:11:09.740 "method": "nvmf_create_subsystem", 00:11:09.740 "params": { 00:11:09.740 "nqn": "nqn.2016-06.io.spdk:cnode29733", 00:11:09.740 "min_cntlid": 6, 00:11:09.740 "max_cntlid": 5 00:11:09.740 } 00:11:09.740 } 00:11:09.740 Got JSON-RPC error response 00:11:09.740 GoRPCClient: error on JSON-RPC call' 00:11:09.740 18:06:07 -- target/invalid.sh@84 -- # [[ 2024/04/25 18:06:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode29733], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:09.740 request: 00:11:09.740 { 00:11:09.740 "method": "nvmf_create_subsystem", 00:11:09.740 "params": { 00:11:09.740 "nqn": "nqn.2016-06.io.spdk:cnode29733", 00:11:09.740 "min_cntlid": 6, 00:11:09.740 "max_cntlid": 5 00:11:09.740 } 00:11:09.740 } 00:11:09.740 Got JSON-RPC error response 00:11:09.740 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:09.740 18:06:07 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:09.998 18:06:07 -- target/invalid.sh@87 -- # out='request: 00:11:09.998 { 00:11:09.998 "name": "foobar", 00:11:09.998 "method": "nvmf_delete_target", 00:11:09.998 "req_id": 1 00:11:09.998 } 00:11:09.998 Got JSON-RPC error response 00:11:09.998 response: 00:11:09.998 { 00:11:09.998 "code": -32602, 00:11:09.998 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:09.998 }' 00:11:09.998 18:06:07 -- target/invalid.sh@88 -- # [[ request: 00:11:09.998 { 00:11:09.998 "name": "foobar", 00:11:09.998 "method": "nvmf_delete_target", 00:11:09.998 "req_id": 1 00:11:09.998 } 00:11:09.998 Got JSON-RPC error response 00:11:09.998 response: 00:11:09.998 { 00:11:09.998 "code": -32602, 00:11:09.998 "message": "The specified target doesn't exist, cannot delete it." 00:11:09.998 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:09.998 18:06:07 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:09.998 18:06:07 -- target/invalid.sh@91 -- # nvmftestfini 00:11:09.998 18:06:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:09.998 18:06:07 -- nvmf/common.sh@116 -- # sync 00:11:09.998 18:06:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:09.998 18:06:07 -- nvmf/common.sh@119 -- # set +e 00:11:09.998 18:06:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:09.998 18:06:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:09.998 rmmod nvme_tcp 00:11:09.998 rmmod nvme_fabrics 00:11:09.998 rmmod nvme_keyring 00:11:09.998 18:06:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:09.998 18:06:07 -- nvmf/common.sh@123 -- # set -e 00:11:09.998 18:06:07 -- nvmf/common.sh@124 -- # return 0 00:11:09.998 18:06:07 -- nvmf/common.sh@477 -- # '[' -n 66308 ']' 00:11:09.998 18:06:07 -- nvmf/common.sh@478 -- # killprocess 66308 00:11:09.998 18:06:07 -- common/autotest_common.sh@926 -- # '[' -z 66308 ']' 00:11:09.998 18:06:07 -- common/autotest_common.sh@930 -- # kill -0 66308 00:11:09.998 18:06:07 -- common/autotest_common.sh@931 -- # uname 00:11:09.998 18:06:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:09.998 18:06:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66308 00:11:09.998 18:06:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:09.998 18:06:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:09.998 18:06:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66308' 00:11:09.998 killing process with pid 66308 00:11:09.998 18:06:07 -- common/autotest_common.sh@945 -- # kill 66308 00:11:09.998 18:06:07 -- common/autotest_common.sh@950 -- # wait 66308 00:11:10.256 18:06:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:10.256 18:06:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:10.256 18:06:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:10.256 18:06:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.256 18:06:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:10.256 18:06:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.256 18:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.256 18:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.256 18:06:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:10.256 00:11:10.256 real 
0m5.636s 00:11:10.256 user 0m22.119s 00:11:10.256 sys 0m1.260s 00:11:10.256 18:06:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.256 18:06:08 -- common/autotest_common.sh@10 -- # set +x 00:11:10.256 ************************************ 00:11:10.256 END TEST nvmf_invalid 00:11:10.256 ************************************ 00:11:10.518 18:06:08 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:10.518 18:06:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:10.518 18:06:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.518 18:06:08 -- common/autotest_common.sh@10 -- # set +x 00:11:10.518 ************************************ 00:11:10.518 START TEST nvmf_abort 00:11:10.518 ************************************ 00:11:10.518 18:06:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:10.518 * Looking for test storage... 00:11:10.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.518 18:06:08 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.518 18:06:08 -- nvmf/common.sh@7 -- # uname -s 00:11:10.518 18:06:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.518 18:06:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.518 18:06:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.518 18:06:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.518 18:06:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.518 18:06:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.518 18:06:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.518 18:06:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.518 18:06:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.518 18:06:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.518 18:06:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:10.518 18:06:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:10.518 18:06:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.518 18:06:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.518 18:06:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.518 18:06:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.518 18:06:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.518 18:06:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.518 18:06:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.518 18:06:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.519 18:06:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.519 18:06:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.519 18:06:08 -- paths/export.sh@5 -- # export PATH 00:11:10.519 18:06:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.519 18:06:08 -- nvmf/common.sh@46 -- # : 0 00:11:10.519 18:06:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:10.519 18:06:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:10.519 18:06:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:10.519 18:06:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.519 18:06:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.519 18:06:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:10.519 18:06:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:10.519 18:06:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:10.519 18:06:08 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:10.519 18:06:08 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:10.519 18:06:08 -- target/abort.sh@14 -- # nvmftestinit 00:11:10.519 18:06:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:10.519 18:06:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.519 18:06:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:10.519 18:06:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:10.519 18:06:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:10.519 18:06:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.519 18:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.519 18:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.519 18:06:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:10.519 18:06:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:10.519 18:06:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:10.519 18:06:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:10.519 18:06:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:10.519 18:06:08 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:11:10.519 18:06:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.519 18:06:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.519 18:06:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:10.519 18:06:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:10.519 18:06:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:10.519 18:06:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:10.519 18:06:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:10.519 18:06:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.519 18:06:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:10.519 18:06:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:10.519 18:06:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:10.519 18:06:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:10.519 18:06:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:10.519 18:06:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:10.519 Cannot find device "nvmf_tgt_br" 00:11:10.519 18:06:08 -- nvmf/common.sh@154 -- # true 00:11:10.519 18:06:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.519 Cannot find device "nvmf_tgt_br2" 00:11:10.519 18:06:08 -- nvmf/common.sh@155 -- # true 00:11:10.519 18:06:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:10.519 18:06:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:10.519 Cannot find device "nvmf_tgt_br" 00:11:10.519 18:06:08 -- nvmf/common.sh@157 -- # true 00:11:10.519 18:06:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:10.519 Cannot find device "nvmf_tgt_br2" 00:11:10.519 18:06:08 -- nvmf/common.sh@158 -- # true 00:11:10.519 18:06:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:10.519 18:06:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:10.519 18:06:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.777 18:06:08 -- nvmf/common.sh@161 -- # true 00:11:10.777 18:06:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.777 18:06:08 -- nvmf/common.sh@162 -- # true 00:11:10.777 18:06:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:10.777 18:06:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:10.777 18:06:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:10.777 18:06:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:10.777 18:06:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:10.777 18:06:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:10.777 18:06:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:10.777 18:06:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:10.777 18:06:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:10.777 18:06:08 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:11:10.777 18:06:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:10.777 18:06:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:10.777 18:06:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:10.777 18:06:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:10.777 18:06:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:10.777 18:06:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:10.777 18:06:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:10.777 18:06:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:10.777 18:06:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:10.777 18:06:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:10.777 18:06:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:10.777 18:06:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:10.777 18:06:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:10.777 18:06:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:10.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:10.777 00:11:10.777 --- 10.0.0.2 ping statistics --- 00:11:10.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.777 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:10.777 18:06:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:10.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:10.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:11:10.777 00:11:10.777 --- 10.0.0.3 ping statistics --- 00:11:10.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.777 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:10.777 18:06:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:10.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:10.777 00:11:10.777 --- 10.0.0.1 ping statistics --- 00:11:10.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.777 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:10.777 18:06:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.777 18:06:08 -- nvmf/common.sh@421 -- # return 0 00:11:10.777 18:06:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:10.777 18:06:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.777 18:06:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:10.777 18:06:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:10.777 18:06:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.778 18:06:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:10.778 18:06:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:10.778 18:06:08 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:10.778 18:06:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:10.778 18:06:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:10.778 18:06:08 -- common/autotest_common.sh@10 -- # set +x 00:11:10.778 18:06:08 -- nvmf/common.sh@469 -- # nvmfpid=66819 00:11:10.778 18:06:08 -- nvmf/common.sh@470 -- # waitforlisten 66819 00:11:10.778 18:06:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:10.778 18:06:08 -- common/autotest_common.sh@819 -- # '[' -z 66819 ']' 00:11:10.778 18:06:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.778 18:06:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:10.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.778 18:06:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.778 18:06:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:10.778 18:06:08 -- common/autotest_common.sh@10 -- # set +x 00:11:10.778 [2024-04-25 18:06:08.703152] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:10.778 [2024-04-25 18:06:08.703264] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.035 [2024-04-25 18:06:08.836021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.035 [2024-04-25 18:06:08.932236] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:11.035 [2024-04-25 18:06:08.932439] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.035 [2024-04-25 18:06:08.932453] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.035 [2024-04-25 18:06:08.932462] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
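The nvmf_veth_init sequence above is the entire virtual topology used when NET_TYPE=virt: a dedicated network namespace for the target, veth pairs whose host-side ends get enslaved to a bridge, addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), an iptables ACCEPT rule for TCP port 4420, and three pings to prove connectivity before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch built only from the commands visible in the trace (the canonical helper lives in test/nvmf/common.sh; the second target interface, nvmf_tgt_if2 with 10.0.0.3, follows the same pattern and is omitted here):

    # rough sketch of the nvmf_veth_init topology, run as root
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host initiator reaching the target namespace

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from the cleanup half of the same helper failing harmlessly because nothing from a previous run was left behind.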
00:11:11.035 [2024-04-25 18:06:08.933748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.035 [2024-04-25 18:06:08.933986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.035 [2024-04-25 18:06:08.933996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.012 18:06:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:12.012 18:06:09 -- common/autotest_common.sh@852 -- # return 0 00:11:12.012 18:06:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:12.012 18:06:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 18:06:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.012 18:06:09 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:12.012 18:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 [2024-04-25 18:06:09.752404] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.012 18:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.012 18:06:09 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:12.012 18:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 Malloc0 00:11:12.012 18:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.012 18:06:09 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:12.012 18:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 Delay0 00:11:12.012 18:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.012 18:06:09 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:12.012 18:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 18:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.012 18:06:09 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:12.012 18:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 18:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.012 18:06:09 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:12.012 18:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 [2024-04-25 18:06:09.825639] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.012 18:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.012 18:06:09 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:12.012 18:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.012 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:11:12.012 18:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.012 18:06:09 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:12.270 [2024-04-25 18:06:10.005874] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:14.170 Initializing NVMe Controllers 00:11:14.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:14.170 controller IO queue size 128 less than required 00:11:14.170 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:14.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:14.170 Initialization complete. Launching workers. 00:11:14.170 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34836 00:11:14.170 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34897, failed to submit 62 00:11:14.170 success 34836, unsuccess 61, failed 0 00:11:14.170 18:06:12 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:14.170 18:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.170 18:06:12 -- common/autotest_common.sh@10 -- # set +x 00:11:14.170 18:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.170 18:06:12 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:14.170 18:06:12 -- target/abort.sh@38 -- # nvmftestfini 00:11:14.170 18:06:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:14.170 18:06:12 -- nvmf/common.sh@116 -- # sync 00:11:14.170 18:06:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:14.170 18:06:12 -- nvmf/common.sh@119 -- # set +e 00:11:14.170 18:06:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:14.170 18:06:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:14.170 rmmod nvme_tcp 00:11:14.428 rmmod nvme_fabrics 00:11:14.428 rmmod nvme_keyring 00:11:14.428 18:06:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:14.428 18:06:12 -- nvmf/common.sh@123 -- # set -e 00:11:14.428 18:06:12 -- nvmf/common.sh@124 -- # return 0 00:11:14.428 18:06:12 -- nvmf/common.sh@477 -- # '[' -n 66819 ']' 00:11:14.428 18:06:12 -- nvmf/common.sh@478 -- # killprocess 66819 00:11:14.428 18:06:12 -- common/autotest_common.sh@926 -- # '[' -z 66819 ']' 00:11:14.428 18:06:12 -- common/autotest_common.sh@930 -- # kill -0 66819 00:11:14.428 18:06:12 -- common/autotest_common.sh@931 -- # uname 00:11:14.428 18:06:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:14.428 18:06:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66819 00:11:14.428 killing process with pid 66819 00:11:14.428 18:06:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:14.428 18:06:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:14.428 18:06:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66819' 00:11:14.428 18:06:12 -- common/autotest_common.sh@945 -- # kill 66819 00:11:14.428 18:06:12 -- common/autotest_common.sh@950 -- # wait 66819 00:11:14.686 18:06:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:14.686 18:06:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:14.686 18:06:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:14.686 18:06:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.686 18:06:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:14.686 18:06:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.686 
18:06:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.686 18:06:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.686 18:06:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:14.686 00:11:14.686 real 0m4.286s 00:11:14.686 user 0m12.445s 00:11:14.686 sys 0m0.992s 00:11:14.686 18:06:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.686 18:06:12 -- common/autotest_common.sh@10 -- # set +x 00:11:14.686 ************************************ 00:11:14.686 END TEST nvmf_abort 00:11:14.686 ************************************ 00:11:14.686 18:06:12 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:14.686 18:06:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:14.686 18:06:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.686 18:06:12 -- common/autotest_common.sh@10 -- # set +x 00:11:14.686 ************************************ 00:11:14.686 START TEST nvmf_ns_hotplug_stress 00:11:14.686 ************************************ 00:11:14.686 18:06:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:14.945 * Looking for test storage... 00:11:14.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.945 18:06:12 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.945 18:06:12 -- nvmf/common.sh@7 -- # uname -s 00:11:14.945 18:06:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.945 18:06:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.945 18:06:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.945 18:06:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.945 18:06:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.945 18:06:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.945 18:06:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.945 18:06:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.945 18:06:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.945 18:06:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.945 18:06:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:14.945 18:06:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:14.945 18:06:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.945 18:06:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.945 18:06:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.945 18:06:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.945 18:06:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.945 18:06:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.945 18:06:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.946 18:06:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.946 18:06:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.946 18:06:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.946 18:06:12 -- paths/export.sh@5 -- # export PATH 00:11:14.946 18:06:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.946 18:06:12 -- nvmf/common.sh@46 -- # : 0 00:11:14.946 18:06:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:14.946 18:06:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:14.946 18:06:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:14.946 18:06:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.946 18:06:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.946 18:06:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:14.946 18:06:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:14.946 18:06:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:14.946 18:06:12 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.946 18:06:12 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:11:14.946 18:06:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:14.946 18:06:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.946 18:06:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:14.946 18:06:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:14.946 18:06:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:14.946 18:06:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:14.946 18:06:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.946 18:06:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.946 18:06:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:14.946 18:06:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:14.946 18:06:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:14.946 18:06:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:14.946 18:06:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:14.946 18:06:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:14.946 18:06:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.946 18:06:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.946 18:06:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:14.946 18:06:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:14.946 18:06:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.946 18:06:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.946 18:06:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.946 18:06:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.946 18:06:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.946 18:06:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.946 18:06:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.946 18:06:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.946 18:06:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:14.946 18:06:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:14.946 Cannot find device "nvmf_tgt_br" 00:11:14.946 18:06:12 -- nvmf/common.sh@154 -- # true 00:11:14.946 18:06:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:14.946 Cannot find device "nvmf_tgt_br2" 00:11:14.946 18:06:12 -- nvmf/common.sh@155 -- # true 00:11:14.946 18:06:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:14.946 18:06:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:14.946 Cannot find device "nvmf_tgt_br" 00:11:14.946 18:06:12 -- nvmf/common.sh@157 -- # true 00:11:14.946 18:06:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:14.946 Cannot find device "nvmf_tgt_br2" 00:11:14.946 18:06:12 -- nvmf/common.sh@158 -- # true 00:11:14.946 18:06:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:14.946 18:06:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:14.946 18:06:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.946 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.946 18:06:12 -- nvmf/common.sh@161 -- # true 00:11:14.946 18:06:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.946 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.946 18:06:12 -- nvmf/common.sh@162 -- # true 00:11:14.946 18:06:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:14.946 18:06:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:14.946 18:06:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:14.946 18:06:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:14.946 18:06:12 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:14.946 18:06:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:14.946 18:06:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:14.946 18:06:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:14.946 18:06:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:14.946 18:06:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:14.946 18:06:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:14.946 18:06:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:15.205 18:06:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:15.205 18:06:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.205 18:06:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.205 18:06:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:15.205 18:06:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:15.205 18:06:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:15.205 18:06:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:15.205 18:06:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:15.205 18:06:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:15.205 18:06:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.205 18:06:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.205 18:06:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:15.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:15.205 00:11:15.205 --- 10.0.0.2 ping statistics --- 00:11:15.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.205 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:15.205 18:06:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:15.205 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.205 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:11:15.205 00:11:15.205 --- 10.0.0.3 ping statistics --- 00:11:15.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.205 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:15.205 18:06:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:15.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:11:15.205 00:11:15.205 --- 10.0.0.1 ping statistics --- 00:11:15.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.205 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:11:15.205 18:06:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.205 18:06:12 -- nvmf/common.sh@421 -- # return 0 00:11:15.205 18:06:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:15.205 18:06:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.205 18:06:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:15.205 18:06:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:15.205 18:06:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.205 18:06:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:15.205 18:06:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:15.205 18:06:12 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:11:15.205 18:06:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:15.205 18:06:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:15.205 18:06:12 -- common/autotest_common.sh@10 -- # set +x 00:11:15.205 18:06:12 -- nvmf/common.sh@469 -- # nvmfpid=67081 00:11:15.205 18:06:12 -- nvmf/common.sh@470 -- # waitforlisten 67081 00:11:15.205 18:06:12 -- common/autotest_common.sh@819 -- # '[' -z 67081 ']' 00:11:15.205 18:06:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:15.205 18:06:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.205 18:06:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:15.205 18:06:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.205 18:06:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:15.205 18:06:12 -- common/autotest_common.sh@10 -- # set +x 00:11:15.205 [2024-04-25 18:06:13.052242] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:15.205 [2024-04-25 18:06:13.052349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.464 [2024-04-25 18:06:13.201914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:15.464 [2024-04-25 18:06:13.303319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.464 [2024-04-25 18:06:13.303502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.464 [2024-04-25 18:06:13.303518] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.464 [2024-04-25 18:06:13.303529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
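nvmfappstart above backgrounds nvmf_tgt inside the target namespace (via the NVMF_TARGET_NS_CMD prefix) and then sits in waitforlisten until the application answers on its RPC socket, which is what the repeated "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is about. A rough standalone equivalent of that start-and-wait pattern, assuming the default socket path (the real helper in autotest_common.sh has more retries and diagnostics):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # bail out early if the target crashed during startup
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
        # consider it ready once the socket exists and a trivial RPC succeeds
        [ -S /var/tmp/spdk.sock ] && ./scripts/rpc.py rpc_get_methods &>/dev/null && break
        sleep 0.1
    done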
00:11:15.464 [2024-04-25 18:06:13.304178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.464 [2024-04-25 18:06:13.304424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.464 [2024-04-25 18:06:13.304439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.030 18:06:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:16.030 18:06:13 -- common/autotest_common.sh@852 -- # return 0 00:11:16.030 18:06:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:16.030 18:06:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:16.030 18:06:13 -- common/autotest_common.sh@10 -- # set +x 00:11:16.288 18:06:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.288 18:06:13 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:11:16.288 18:06:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:16.546 [2024-04-25 18:06:14.259377] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.546 18:06:14 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:16.804 18:06:14 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.804 [2024-04-25 18:06:14.732257] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.062 18:06:14 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:17.062 18:06:14 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:17.321 Malloc0 00:11:17.321 18:06:15 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:17.578 Delay0 00:11:17.578 18:06:15 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.836 18:06:15 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:18.095 NULL1 00:11:18.095 18:06:15 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:18.354 18:06:16 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=67212 00:11:18.354 18:06:16 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:18.354 18:06:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:18.354 18:06:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.763 Read completed with error (sct=0, sc=11) 00:11:19.763 18:06:17 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.763 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:11:19.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.022 18:06:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:11:20.022 18:06:17 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:20.022 true 00:11:20.280 18:06:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:20.280 18:06:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.844 18:06:18 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.101 18:06:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:11:21.101 18:06:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:21.362 true 00:11:21.362 18:06:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:21.362 18:06:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.620 18:06:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.878 18:06:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:11:21.878 18:06:19 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:22.137 true 00:11:22.137 18:06:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:22.137 18:06:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.074 18:06:20 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.332 18:06:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:11:23.332 18:06:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:23.332 true 00:11:23.591 18:06:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:23.591 18:06:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.591 18:06:21 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.850 18:06:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:11:23.850 18:06:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:24.109 true 00:11:24.109 18:06:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:24.109 18:06:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.046 18:06:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
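From this point the trace is the main loop of ns_hotplug_stress.sh: while the spdk_nvme_perf job started earlier (PERF_PID=67212) keeps issuing 512-byte random reads against 10.0.0.2:4420, the script re-attaches the Delay0 namespace, grows the NULL1 bdev by one block, checks that perf is still alive, and removes a namespace again, over and over. That is why null_size ticks up 1001, 1002, 1003 and so on, and why the initiator logs bursts of "Read completed with error (sct=0, sc=11)" as I/O lands on namespaces that are being detached. Reduced to a skeleton using only the rpc.py calls visible in the trace (ordering and error handling simplified):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    done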
00:11:25.305 18:06:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:11:25.305 18:06:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:25.305 true 00:11:25.563 18:06:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:25.563 18:06:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.822 18:06:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.822 18:06:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:11:25.822 18:06:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:26.081 true 00:11:26.081 18:06:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:26.081 18:06:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.017 18:06:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.275 18:06:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:11:27.275 18:06:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:27.275 true 00:11:27.275 18:06:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:27.275 18:06:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.534 18:06:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.792 18:06:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:11:27.792 18:06:25 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:28.051 true 00:11:28.051 18:06:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:28.051 18:06:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.988 18:06:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:29.246 18:06:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:11:29.246 18:06:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:29.504 true 00:11:29.504 18:06:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:29.504 18:06:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.763 18:06:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.021 18:06:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:11:30.021 18:06:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:30.280 true 00:11:30.280 18:06:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:30.280 18:06:27 -- target/ns_hotplug_stress.sh@36 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.217 18:06:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.217 18:06:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:11:31.217 18:06:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:31.476 true 00:11:31.476 18:06:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:31.476 18:06:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.735 18:06:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.735 18:06:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:11:31.735 18:06:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:31.993 true 00:11:31.993 18:06:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:31.993 18:06:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.931 18:06:30 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.190 18:06:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:11:33.190 18:06:31 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:33.448 true 00:11:33.448 18:06:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:33.448 18:06:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.707 18:06:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.965 18:06:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:11:33.965 18:06:31 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:34.224 true 00:11:34.224 18:06:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:34.224 18:06:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.159 18:06:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.159 18:06:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:11:35.159 18:06:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:35.416 true 00:11:35.416 18:06:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:35.416 18:06:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.674 18:06:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.932 18:06:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:11:35.932 18:06:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:11:36.190 true 00:11:36.190 18:06:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:36.190 18:06:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.124 18:06:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.383 18:06:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:11:37.383 18:06:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:37.383 true 00:11:37.383 18:06:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:37.383 18:06:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.642 18:06:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.900 18:06:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:11:37.900 18:06:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:38.159 true 00:11:38.159 18:06:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:38.159 18:06:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.095 18:06:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.354 18:06:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:11:39.354 18:06:37 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:39.613 true 00:11:39.613 18:06:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:39.613 18:06:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.872 18:06:37 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.130 18:06:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:11:40.130 18:06:37 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:40.130 true 00:11:40.130 18:06:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:40.130 18:06:38 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.065 18:06:38 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.324 18:06:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:11:41.324 18:06:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:41.583 true 00:11:41.583 18:06:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:41.583 18:06:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.841 18:06:39 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
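The kill -0 67212 between every iteration is the standard shell liveness probe: signal 0 is never delivered, the call only checks that the process still exists, so the loop keeps going exactly as long as the 30-second perf run does. It also explains why the later "kill: (67212) - No such process" line is expected output rather than a failure: by then perf has exited on its own and the probe simply reports that. A minimal illustration of the idiom:

    if kill -0 "$PERF_PID" 2>/dev/null; then
        echo "perf is still running; keep hot-plugging namespaces"
    else
        echo "perf has finished; stop the stress loop"
    fi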
00:11:41.842 18:06:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:11:41.842 18:06:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:42.101 true 00:11:42.101 18:06:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:42.101 18:06:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.037 18:06:40 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.295 18:06:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:11:43.296 18:06:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:43.554 true 00:11:43.554 18:06:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:43.554 18:06:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.812 18:06:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.070 18:06:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:11:44.070 18:06:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:44.328 true 00:11:44.329 18:06:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:44.329 18:06:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.587 18:06:42 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.845 18:06:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:11:44.845 18:06:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:45.104 true 00:11:45.104 18:06:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:45.104 18:06:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.054 18:06:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.312 18:06:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:11:46.312 18:06:44 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:46.570 true 00:11:46.570 18:06:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:46.570 18:06:44 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.508 18:06:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.508 18:06:45 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:11:47.508 18:06:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:47.766 true 00:11:47.766 18:06:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:47.766 18:06:45 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.024 18:06:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.282 18:06:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:11:48.282 18:06:46 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:48.541 true 00:11:48.541 18:06:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:48.541 18:06:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.500 Initializing NVMe Controllers 00:11:49.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.500 Controller IO queue size 128, less than required. 00:11:49.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:49.501 Controller IO queue size 128, less than required. 00:11:49.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:49.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:49.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:49.501 Initialization complete. Launching workers. 
00:11:49.501 ======================================================== 00:11:49.501 Latency(us) 00:11:49.501 Device Information : IOPS MiB/s Average min max 00:11:49.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 526.63 0.26 141275.00 3043.38 1124699.36 00:11:49.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12111.17 5.91 10568.78 2653.47 533818.83 00:11:49.501 ======================================================== 00:11:49.501 Total : 12637.80 6.17 16015.44 2653.47 1124699.36 00:11:49.501 00:11:49.501 18:06:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.501 18:06:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:11:49.501 18:06:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:49.759 true 00:11:49.759 18:06:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67212 00:11:49.759 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (67212) - No such process 00:11:49.759 18:06:47 -- target/ns_hotplug_stress.sh@44 -- # wait 67212 00:11:49.759 18:06:47 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:49.759 18:06:47 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:11:49.759 18:06:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:49.759 18:06:47 -- nvmf/common.sh@116 -- # sync 00:11:49.759 18:06:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:49.759 18:06:47 -- nvmf/common.sh@119 -- # set +e 00:11:49.759 18:06:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:49.759 18:06:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:49.759 rmmod nvme_tcp 00:11:49.759 rmmod nvme_fabrics 00:11:49.759 rmmod nvme_keyring 00:11:50.017 18:06:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:50.017 18:06:47 -- nvmf/common.sh@123 -- # set -e 00:11:50.017 18:06:47 -- nvmf/common.sh@124 -- # return 0 00:11:50.017 18:06:47 -- nvmf/common.sh@477 -- # '[' -n 67081 ']' 00:11:50.018 18:06:47 -- nvmf/common.sh@478 -- # killprocess 67081 00:11:50.018 18:06:47 -- common/autotest_common.sh@926 -- # '[' -z 67081 ']' 00:11:50.018 18:06:47 -- common/autotest_common.sh@930 -- # kill -0 67081 00:11:50.018 18:06:47 -- common/autotest_common.sh@931 -- # uname 00:11:50.018 18:06:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:50.018 18:06:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67081 00:11:50.018 killing process with pid 67081 00:11:50.018 18:06:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:50.018 18:06:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:50.018 18:06:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67081' 00:11:50.018 18:06:47 -- common/autotest_common.sh@945 -- # kill 67081 00:11:50.018 18:06:47 -- common/autotest_common.sh@950 -- # wait 67081 00:11:50.276 18:06:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:50.276 18:06:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:50.276 18:06:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:50.276 18:06:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.276 18:06:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:50.276 18:06:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.276 18:06:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:50.276 18:06:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.276 18:06:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:50.276 00:11:50.276 real 0m35.570s 00:11:50.276 user 2m30.455s 00:11:50.276 sys 0m7.725s 00:11:50.276 18:06:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.276 18:06:48 -- common/autotest_common.sh@10 -- # set +x 00:11:50.276 ************************************ 00:11:50.276 END TEST nvmf_ns_hotplug_stress 00:11:50.276 ************************************ 00:11:50.276 18:06:48 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:50.276 18:06:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:50.276 18:06:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.276 18:06:48 -- common/autotest_common.sh@10 -- # set +x 00:11:50.276 ************************************ 00:11:50.276 START TEST nvmf_connect_stress 00:11:50.276 ************************************ 00:11:50.276 18:06:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:50.535 * Looking for test storage... 00:11:50.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:50.535 18:06:48 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:50.535 18:06:48 -- nvmf/common.sh@7 -- # uname -s 00:11:50.535 18:06:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.535 18:06:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.535 18:06:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.535 18:06:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.535 18:06:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.535 18:06:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.535 18:06:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.535 18:06:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.535 18:06:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.535 18:06:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.535 18:06:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:50.535 18:06:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:11:50.535 18:06:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.535 18:06:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.535 18:06:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:50.535 18:06:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:50.535 18:06:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.535 18:06:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.535 18:06:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.535 18:06:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.535 18:06:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.535 18:06:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.535 18:06:48 -- paths/export.sh@5 -- # export PATH 00:11:50.535 18:06:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.535 18:06:48 -- nvmf/common.sh@46 -- # : 0 00:11:50.535 18:06:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:50.535 18:06:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:50.535 18:06:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:50.535 18:06:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.535 18:06:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.535 18:06:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:50.535 18:06:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:50.535 18:06:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:50.535 18:06:48 -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:50.535 18:06:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:50.535 18:06:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.535 18:06:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:50.535 18:06:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:50.535 18:06:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:50.535 18:06:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.535 18:06:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.535 18:06:48 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.535 18:06:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:50.535 18:06:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:50.535 18:06:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:50.535 18:06:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:50.535 18:06:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:50.535 18:06:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:50.535 18:06:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.535 18:06:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.535 18:06:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:50.535 18:06:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:50.535 18:06:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:50.535 18:06:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:50.535 18:06:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:50.535 18:06:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.535 18:06:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:50.535 18:06:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:50.535 18:06:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:50.535 18:06:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:50.536 18:06:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:50.536 18:06:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:50.536 Cannot find device "nvmf_tgt_br" 00:11:50.536 18:06:48 -- nvmf/common.sh@154 -- # true 00:11:50.536 18:06:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:50.536 Cannot find device "nvmf_tgt_br2" 00:11:50.536 18:06:48 -- nvmf/common.sh@155 -- # true 00:11:50.536 18:06:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:50.536 18:06:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:50.536 Cannot find device "nvmf_tgt_br" 00:11:50.536 18:06:48 -- nvmf/common.sh@157 -- # true 00:11:50.536 18:06:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:50.536 Cannot find device "nvmf_tgt_br2" 00:11:50.536 18:06:48 -- nvmf/common.sh@158 -- # true 00:11:50.536 18:06:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:50.536 18:06:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:50.536 18:06:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:50.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.536 18:06:48 -- nvmf/common.sh@161 -- # true 00:11:50.536 18:06:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:50.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.536 18:06:48 -- nvmf/common.sh@162 -- # true 00:11:50.536 18:06:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:50.536 18:06:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:50.536 18:06:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:50.536 18:06:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:50.536 18:06:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:50.536 18:06:48 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:50.536 18:06:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:50.536 18:06:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:50.536 18:06:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:50.794 18:06:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:50.794 18:06:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:50.794 18:06:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:50.794 18:06:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:50.794 18:06:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:50.794 18:06:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:50.794 18:06:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:50.794 18:06:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:50.794 18:06:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:50.794 18:06:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:50.794 18:06:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:50.794 18:06:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:50.794 18:06:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:50.794 18:06:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:50.794 18:06:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:50.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:11:50.794 00:11:50.794 --- 10.0.0.2 ping statistics --- 00:11:50.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.794 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:50.794 18:06:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:50.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:50.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:11:50.794 00:11:50.794 --- 10.0.0.3 ping statistics --- 00:11:50.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.794 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:50.794 18:06:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:50.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:50.794 00:11:50.794 --- 10.0.0.1 ping statistics --- 00:11:50.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.794 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:50.794 18:06:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.794 18:06:48 -- nvmf/common.sh@421 -- # return 0 00:11:50.794 18:06:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:50.794 18:06:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.794 18:06:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:50.794 18:06:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:50.794 18:06:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.794 18:06:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:50.794 18:06:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:50.794 18:06:48 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:50.794 18:06:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:50.794 18:06:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:50.794 18:06:48 -- common/autotest_common.sh@10 -- # set +x 00:11:50.794 18:06:48 -- nvmf/common.sh@469 -- # nvmfpid=68370 00:11:50.794 18:06:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:50.794 18:06:48 -- nvmf/common.sh@470 -- # waitforlisten 68370 00:11:50.794 18:06:48 -- common/autotest_common.sh@819 -- # '[' -z 68370 ']' 00:11:50.794 18:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.794 18:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:50.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.794 18:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.794 18:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:50.794 18:06:48 -- common/autotest_common.sh@10 -- # set +x 00:11:50.794 [2024-04-25 18:06:48.657714] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:50.794 [2024-04-25 18:06:48.657821] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.053 [2024-04-25 18:06:48.800957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:51.053 [2024-04-25 18:06:48.933017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:51.053 [2024-04-25 18:06:48.933250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.053 [2024-04-25 18:06:48.933268] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.053 [2024-04-25 18:06:48.933297] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:51.053 [2024-04-25 18:06:48.933679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.053 [2024-04-25 18:06:48.933695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.053 [2024-04-25 18:06:48.934235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.988 18:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:51.988 18:06:49 -- common/autotest_common.sh@852 -- # return 0 00:11:51.988 18:06:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:51.988 18:06:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:51.988 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.988 18:06:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.988 18:06:49 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.988 18:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.988 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.988 [2024-04-25 18:06:49.679117] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.988 18:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.988 18:06:49 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:51.988 18:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.988 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.988 18:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.988 18:06:49 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.988 18:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.988 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.988 [2024-04-25 18:06:49.699278] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.988 18:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.988 18:06:49 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:51.988 18:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.988 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.988 NULL1 00:11:51.988 18:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.988 18:06:49 -- target/connect_stress.sh@21 -- # PERF_PID=68422 00:11:51.988 18:06:49 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:51.988 18:06:49 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:51.988 18:06:49 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # seq 1 20 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- 
target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.988 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.988 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.989 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.989 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.989 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.989 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.989 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.989 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.989 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.989 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.989 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.989 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.989 18:06:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:51.989 18:06:49 -- target/connect_stress.sh@28 -- # cat 00:11:51.989 18:06:49 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:51.989 18:06:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.989 18:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.989 18:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:52.247 18:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.247 18:06:50 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:52.247 18:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.247 18:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.247 18:06:50 -- common/autotest_common.sh@10 -- # set +x 00:11:52.813 18:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.813 18:06:50 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:52.813 18:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.813 18:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.813 18:06:50 -- common/autotest_common.sh@10 -- # set +x 00:11:53.071 18:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.071 18:06:50 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:53.071 18:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.071 18:06:50 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:53.071 18:06:50 -- common/autotest_common.sh@10 -- # set +x 00:11:53.329 18:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.330 18:06:51 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:53.330 18:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.330 18:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.330 18:06:51 -- common/autotest_common.sh@10 -- # set +x 00:11:53.588 18:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.588 18:06:51 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:53.588 18:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.588 18:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.588 18:06:51 -- common/autotest_common.sh@10 -- # set +x 00:11:53.846 18:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.846 18:06:51 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:53.846 18:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.846 18:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.846 18:06:51 -- common/autotest_common.sh@10 -- # set +x 00:11:54.413 18:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.413 18:06:52 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:54.413 18:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.413 18:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.413 18:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:54.671 18:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.671 18:06:52 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:54.671 18:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.671 18:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.671 18:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:54.929 18:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.929 18:06:52 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:54.929 18:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.929 18:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.929 18:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:55.188 18:06:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:55.188 18:06:53 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:55.188 18:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.188 18:06:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:55.188 18:06:53 -- common/autotest_common.sh@10 -- # set +x 00:11:55.445 18:06:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:55.445 18:06:53 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:55.445 18:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.445 18:06:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:55.445 18:06:53 -- common/autotest_common.sh@10 -- # set +x 00:11:56.009 18:06:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.009 18:06:53 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:56.009 18:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.009 18:06:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.009 18:06:53 -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 18:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.266 18:06:54 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:56.266 18:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.266 18:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.266 
18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:11:56.525 18:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.525 18:06:54 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:56.525 18:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.525 18:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.525 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:11:56.783 18:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.783 18:06:54 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:56.783 18:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.783 18:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.783 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:11:57.349 18:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.350 18:06:54 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:57.350 18:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.350 18:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.350 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:11:57.608 18:06:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.608 18:06:55 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:57.608 18:06:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.608 18:06:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.608 18:06:55 -- common/autotest_common.sh@10 -- # set +x 00:11:57.866 18:06:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.866 18:06:55 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:57.866 18:06:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.866 18:06:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.866 18:06:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.124 18:06:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.124 18:06:55 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:58.124 18:06:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.124 18:06:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.124 18:06:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.383 18:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.383 18:06:56 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:58.383 18:06:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.383 18:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.383 18:06:56 -- common/autotest_common.sh@10 -- # set +x 00:11:58.949 18:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.949 18:06:56 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:58.949 18:06:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.949 18:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.949 18:06:56 -- common/autotest_common.sh@10 -- # set +x 00:11:59.207 18:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.208 18:06:56 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:59.208 18:06:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.208 18:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.208 18:06:56 -- common/autotest_common.sh@10 -- # set +x 00:11:59.477 18:06:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.477 18:06:57 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:59.477 18:06:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.477 18:06:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.477 18:06:57 -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.748 18:06:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.749 18:06:57 -- target/connect_stress.sh@34 -- # kill -0 68422 00:11:59.749 18:06:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.749 18:06:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.749 18:06:57 -- common/autotest_common.sh@10 -- # set +x 00:12:00.007 18:06:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.007 18:06:57 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:00.007 18:06:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.007 18:06:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.007 18:06:57 -- common/autotest_common.sh@10 -- # set +x 00:12:00.573 18:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.573 18:06:58 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:00.573 18:06:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.573 18:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.573 18:06:58 -- common/autotest_common.sh@10 -- # set +x 00:12:00.831 18:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.831 18:06:58 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:00.831 18:06:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.831 18:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.831 18:06:58 -- common/autotest_common.sh@10 -- # set +x 00:12:01.090 18:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.090 18:06:58 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:01.090 18:06:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.090 18:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.090 18:06:58 -- common/autotest_common.sh@10 -- # set +x 00:12:01.348 18:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.348 18:06:59 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:01.348 18:06:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.348 18:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.348 18:06:59 -- common/autotest_common.sh@10 -- # set +x 00:12:01.606 18:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.606 18:06:59 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:01.606 18:06:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.606 18:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.606 18:06:59 -- common/autotest_common.sh@10 -- # set +x 00:12:02.173 18:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.173 18:06:59 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:02.173 18:06:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.173 18:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.173 18:06:59 -- common/autotest_common.sh@10 -- # set +x 00:12:02.173 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:02.432 18:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.432 18:07:00 -- target/connect_stress.sh@34 -- # kill -0 68422 00:12:02.432 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (68422) - No such process 00:12:02.432 18:07:00 -- target/connect_stress.sh@38 -- # wait 68422 00:12:02.432 18:07:00 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:02.432 18:07:00 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:02.432 18:07:00 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:12:02.432 18:07:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:02.432 18:07:00 -- nvmf/common.sh@116 -- # sync 00:12:02.432 18:07:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:02.432 18:07:00 -- nvmf/common.sh@119 -- # set +e 00:12:02.432 18:07:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:02.432 18:07:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:02.432 rmmod nvme_tcp 00:12:02.432 rmmod nvme_fabrics 00:12:02.432 rmmod nvme_keyring 00:12:02.432 18:07:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:02.432 18:07:00 -- nvmf/common.sh@123 -- # set -e 00:12:02.432 18:07:00 -- nvmf/common.sh@124 -- # return 0 00:12:02.432 18:07:00 -- nvmf/common.sh@477 -- # '[' -n 68370 ']' 00:12:02.432 18:07:00 -- nvmf/common.sh@478 -- # killprocess 68370 00:12:02.432 18:07:00 -- common/autotest_common.sh@926 -- # '[' -z 68370 ']' 00:12:02.432 18:07:00 -- common/autotest_common.sh@930 -- # kill -0 68370 00:12:02.432 18:07:00 -- common/autotest_common.sh@931 -- # uname 00:12:02.432 18:07:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:02.432 18:07:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68370 00:12:02.432 18:07:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:02.432 killing process with pid 68370 00:12:02.432 18:07:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:02.432 18:07:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68370' 00:12:02.432 18:07:00 -- common/autotest_common.sh@945 -- # kill 68370 00:12:02.432 18:07:00 -- common/autotest_common.sh@950 -- # wait 68370 00:12:02.690 18:07:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:02.690 18:07:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:02.690 18:07:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:02.690 18:07:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.690 18:07:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:02.690 18:07:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.690 18:07:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.690 18:07:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.691 18:07:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:02.691 00:12:02.691 real 0m12.357s 00:12:02.691 user 0m41.261s 00:12:02.691 sys 0m3.174s 00:12:02.691 18:07:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.691 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:12:02.691 ************************************ 00:12:02.691 END TEST nvmf_connect_stress 00:12:02.691 ************************************ 00:12:02.691 18:07:00 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:02.691 18:07:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:02.691 18:07:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:02.691 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:12:02.691 ************************************ 00:12:02.691 START TEST nvmf_fused_ordering 00:12:02.691 ************************************ 00:12:02.691 18:07:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:02.949 * Looking for test storage... 
00:12:02.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:02.950 18:07:00 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.950 18:07:00 -- nvmf/common.sh@7 -- # uname -s 00:12:02.950 18:07:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.950 18:07:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.950 18:07:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.950 18:07:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.950 18:07:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.950 18:07:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.950 18:07:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.950 18:07:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.950 18:07:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.950 18:07:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.950 18:07:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:12:02.950 18:07:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:12:02.950 18:07:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.950 18:07:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.950 18:07:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.950 18:07:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.950 18:07:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.950 18:07:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.950 18:07:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.950 18:07:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.950 18:07:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.950 18:07:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.950 18:07:00 -- 
paths/export.sh@5 -- # export PATH 00:12:02.950 18:07:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.950 18:07:00 -- nvmf/common.sh@46 -- # : 0 00:12:02.950 18:07:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:02.950 18:07:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:02.950 18:07:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:02.950 18:07:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.950 18:07:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.950 18:07:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:02.950 18:07:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:02.950 18:07:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:02.950 18:07:00 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:02.950 18:07:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:02.950 18:07:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.950 18:07:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:02.950 18:07:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:02.950 18:07:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:02.950 18:07:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.950 18:07:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.950 18:07:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.950 18:07:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:02.950 18:07:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:02.950 18:07:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:02.950 18:07:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:02.950 18:07:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:02.950 18:07:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:02.950 18:07:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.950 18:07:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.950 18:07:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.950 18:07:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:02.950 18:07:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.950 18:07:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.950 18:07:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.950 18:07:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.950 18:07:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.950 18:07:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.950 18:07:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.950 18:07:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.950 18:07:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:02.950 18:07:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:02.950 Cannot find device "nvmf_tgt_br" 00:12:02.950 
18:07:00 -- nvmf/common.sh@154 -- # true 00:12:02.950 18:07:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.950 Cannot find device "nvmf_tgt_br2" 00:12:02.950 18:07:00 -- nvmf/common.sh@155 -- # true 00:12:02.950 18:07:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:02.950 18:07:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:02.950 Cannot find device "nvmf_tgt_br" 00:12:02.950 18:07:00 -- nvmf/common.sh@157 -- # true 00:12:02.950 18:07:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:02.950 Cannot find device "nvmf_tgt_br2" 00:12:02.950 18:07:00 -- nvmf/common.sh@158 -- # true 00:12:02.950 18:07:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:02.950 18:07:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:02.950 18:07:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.950 18:07:00 -- nvmf/common.sh@161 -- # true 00:12:02.950 18:07:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.950 18:07:00 -- nvmf/common.sh@162 -- # true 00:12:02.950 18:07:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.950 18:07:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.950 18:07:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.950 18:07:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.950 18:07:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.950 18:07:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.950 18:07:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.209 18:07:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:03.209 18:07:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:03.209 18:07:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:03.209 18:07:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:03.209 18:07:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:03.209 18:07:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:03.209 18:07:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:03.209 18:07:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.209 18:07:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:03.209 18:07:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:03.209 18:07:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:03.209 18:07:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:03.209 18:07:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:03.209 18:07:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:03.209 18:07:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:03.209 18:07:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:03.209 18:07:00 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:03.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:12:03.209 00:12:03.209 --- 10.0.0.2 ping statistics --- 00:12:03.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.209 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:03.209 18:07:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:03.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:03.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:03.209 00:12:03.209 --- 10.0.0.3 ping statistics --- 00:12:03.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.209 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:03.209 18:07:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:03.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:03.209 00:12:03.209 --- 10.0.0.1 ping statistics --- 00:12:03.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.209 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:03.209 18:07:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.210 18:07:00 -- nvmf/common.sh@421 -- # return 0 00:12:03.210 18:07:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:03.210 18:07:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.210 18:07:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:03.210 18:07:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:03.210 18:07:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.210 18:07:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:03.210 18:07:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:03.210 18:07:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:03.210 18:07:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:03.210 18:07:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:03.210 18:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:03.210 18:07:01 -- nvmf/common.sh@469 -- # nvmfpid=68742 00:12:03.210 18:07:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:03.210 18:07:01 -- nvmf/common.sh@470 -- # waitforlisten 68742 00:12:03.210 18:07:01 -- common/autotest_common.sh@819 -- # '[' -z 68742 ']' 00:12:03.210 18:07:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.210 18:07:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:03.210 18:07:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.210 18:07:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:03.210 18:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:03.210 [2024-04-25 18:07:01.073432] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:03.210 [2024-04-25 18:07:01.073513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.469 [2024-04-25 18:07:01.211460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.469 [2024-04-25 18:07:01.321445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:03.469 [2024-04-25 18:07:01.321610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.469 [2024-04-25 18:07:01.321626] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.469 [2024-04-25 18:07:01.321637] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.469 [2024-04-25 18:07:01.321675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.405 18:07:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:04.405 18:07:02 -- common/autotest_common.sh@852 -- # return 0 00:12:04.405 18:07:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:04.405 18:07:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:04.405 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 18:07:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.405 18:07:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.405 18:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.405 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 [2024-04-25 18:07:02.060108] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.405 18:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.405 18:07:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:04.405 18:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.405 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 18:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.405 18:07:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.405 18:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.405 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 [2024-04-25 18:07:02.076194] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.405 18:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.405 18:07:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:04.405 18:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.405 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 NULL1 00:12:04.405 18:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.405 18:07:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:04.405 18:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.405 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 18:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.405 18:07:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
NULL1 00:12:04.405 18:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.405 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 18:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.405 18:07:02 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:04.405 [2024-04-25 18:07:02.127236] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:04.405 [2024-04-25 18:07:02.127322] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68796 ] 00:12:04.664 Attached to nqn.2016-06.io.spdk:cnode1 00:12:04.664 Namespace ID: 1 size: 1GB 00:12:04.664 fused_ordering(0) 00:12:04.664 fused_ordering(1) 00:12:04.664 fused_ordering(2) 00:12:04.664 fused_ordering(3) 00:12:04.664 fused_ordering(4) 00:12:04.664 fused_ordering(5) 00:12:04.664 fused_ordering(6) 00:12:04.664 fused_ordering(7) 00:12:04.664 fused_ordering(8) 00:12:04.664 fused_ordering(9) 00:12:04.664 fused_ordering(10) 00:12:04.664 fused_ordering(11) 00:12:04.664 fused_ordering(12) 00:12:04.664 fused_ordering(13) 00:12:04.664 fused_ordering(14) 00:12:04.664 fused_ordering(15) 00:12:04.664 fused_ordering(16) 00:12:04.664 fused_ordering(17) 00:12:04.664 fused_ordering(18) 00:12:04.664 fused_ordering(19) 00:12:04.664 fused_ordering(20) 00:12:04.664 fused_ordering(21) 00:12:04.664 fused_ordering(22) 00:12:04.664 fused_ordering(23) 00:12:04.664 fused_ordering(24) 00:12:04.664 fused_ordering(25) 00:12:04.664 fused_ordering(26) 00:12:04.664 fused_ordering(27) 00:12:04.664 fused_ordering(28) 00:12:04.664 fused_ordering(29) 00:12:04.664 fused_ordering(30) 00:12:04.664 fused_ordering(31) 00:12:04.664 fused_ordering(32) 00:12:04.664 fused_ordering(33) 00:12:04.664 fused_ordering(34) 00:12:04.664 fused_ordering(35) 00:12:04.664 fused_ordering(36) 00:12:04.664 fused_ordering(37) 00:12:04.664 fused_ordering(38) 00:12:04.664 fused_ordering(39) 00:12:04.664 fused_ordering(40) 00:12:04.664 fused_ordering(41) 00:12:04.664 fused_ordering(42) 00:12:04.664 fused_ordering(43) 00:12:04.664 fused_ordering(44) 00:12:04.664 fused_ordering(45) 00:12:04.664 fused_ordering(46) 00:12:04.664 fused_ordering(47) 00:12:04.664 fused_ordering(48) 00:12:04.664 fused_ordering(49) 00:12:04.664 fused_ordering(50) 00:12:04.664 fused_ordering(51) 00:12:04.664 fused_ordering(52) 00:12:04.664 fused_ordering(53) 00:12:04.664 fused_ordering(54) 00:12:04.664 fused_ordering(55) 00:12:04.664 fused_ordering(56) 00:12:04.664 fused_ordering(57) 00:12:04.664 fused_ordering(58) 00:12:04.664 fused_ordering(59) 00:12:04.664 fused_ordering(60) 00:12:04.664 fused_ordering(61) 00:12:04.664 fused_ordering(62) 00:12:04.664 fused_ordering(63) 00:12:04.664 fused_ordering(64) 00:12:04.664 fused_ordering(65) 00:12:04.664 fused_ordering(66) 00:12:04.664 fused_ordering(67) 00:12:04.664 fused_ordering(68) 00:12:04.664 fused_ordering(69) 00:12:04.664 fused_ordering(70) 00:12:04.664 fused_ordering(71) 00:12:04.664 fused_ordering(72) 00:12:04.664 fused_ordering(73) 00:12:04.664 fused_ordering(74) 00:12:04.664 fused_ordering(75) 00:12:04.664 fused_ordering(76) 00:12:04.664 fused_ordering(77) 00:12:04.664 fused_ordering(78) 00:12:04.664 fused_ordering(79) 00:12:04.664 fused_ordering(80) 00:12:04.664 
fused_ordering(81) 00:12:04.664 fused_ordering(82) 00:12:04.664 fused_ordering(83) 00:12:04.664 fused_ordering(84) 00:12:04.664 fused_ordering(85) 00:12:04.664 fused_ordering(86) 00:12:04.664 fused_ordering(87) 00:12:04.664 fused_ordering(88) 00:12:04.664 fused_ordering(89) 00:12:04.664 fused_ordering(90) 00:12:04.664 fused_ordering(91) 00:12:04.664 fused_ordering(92) 00:12:04.664 fused_ordering(93) 00:12:04.664 fused_ordering(94) 00:12:04.664 fused_ordering(95) 00:12:04.664 fused_ordering(96) 00:12:04.664 fused_ordering(97) 00:12:04.664 fused_ordering(98) 00:12:04.664 fused_ordering(99) 00:12:04.664 fused_ordering(100) 00:12:04.664 fused_ordering(101) 00:12:04.664 fused_ordering(102) 00:12:04.664 fused_ordering(103) 00:12:04.664 fused_ordering(104) 00:12:04.664 fused_ordering(105) 00:12:04.664 fused_ordering(106) 00:12:04.664 fused_ordering(107) 00:12:04.664 fused_ordering(108) 00:12:04.664 fused_ordering(109) 00:12:04.664 fused_ordering(110) 00:12:04.664 fused_ordering(111) 00:12:04.664 fused_ordering(112) 00:12:04.664 fused_ordering(113) 00:12:04.664 fused_ordering(114) 00:12:04.664 fused_ordering(115) 00:12:04.664 fused_ordering(116) 00:12:04.664 fused_ordering(117) 00:12:04.664 fused_ordering(118) 00:12:04.664 fused_ordering(119) 00:12:04.664 fused_ordering(120) 00:12:04.664 fused_ordering(121) 00:12:04.664 fused_ordering(122) 00:12:04.664 fused_ordering(123) 00:12:04.664 fused_ordering(124) 00:12:04.664 fused_ordering(125) 00:12:04.664 fused_ordering(126) 00:12:04.664 fused_ordering(127) 00:12:04.664 fused_ordering(128) 00:12:04.664 fused_ordering(129) 00:12:04.664 fused_ordering(130) 00:12:04.664 fused_ordering(131) 00:12:04.664 fused_ordering(132) 00:12:04.664 fused_ordering(133) 00:12:04.664 fused_ordering(134) 00:12:04.664 fused_ordering(135) 00:12:04.664 fused_ordering(136) 00:12:04.664 fused_ordering(137) 00:12:04.664 fused_ordering(138) 00:12:04.664 fused_ordering(139) 00:12:04.664 fused_ordering(140) 00:12:04.664 fused_ordering(141) 00:12:04.664 fused_ordering(142) 00:12:04.664 fused_ordering(143) 00:12:04.664 fused_ordering(144) 00:12:04.664 fused_ordering(145) 00:12:04.664 fused_ordering(146) 00:12:04.664 fused_ordering(147) 00:12:04.664 fused_ordering(148) 00:12:04.664 fused_ordering(149) 00:12:04.664 fused_ordering(150) 00:12:04.664 fused_ordering(151) 00:12:04.664 fused_ordering(152) 00:12:04.664 fused_ordering(153) 00:12:04.664 fused_ordering(154) 00:12:04.664 fused_ordering(155) 00:12:04.664 fused_ordering(156) 00:12:04.664 fused_ordering(157) 00:12:04.664 fused_ordering(158) 00:12:04.664 fused_ordering(159) 00:12:04.664 fused_ordering(160) 00:12:04.664 fused_ordering(161) 00:12:04.664 fused_ordering(162) 00:12:04.664 fused_ordering(163) 00:12:04.664 fused_ordering(164) 00:12:04.664 fused_ordering(165) 00:12:04.664 fused_ordering(166) 00:12:04.664 fused_ordering(167) 00:12:04.664 fused_ordering(168) 00:12:04.664 fused_ordering(169) 00:12:04.664 fused_ordering(170) 00:12:04.664 fused_ordering(171) 00:12:04.664 fused_ordering(172) 00:12:04.664 fused_ordering(173) 00:12:04.664 fused_ordering(174) 00:12:04.664 fused_ordering(175) 00:12:04.664 fused_ordering(176) 00:12:04.664 fused_ordering(177) 00:12:04.664 fused_ordering(178) 00:12:04.664 fused_ordering(179) 00:12:04.664 fused_ordering(180) 00:12:04.664 fused_ordering(181) 00:12:04.664 fused_ordering(182) 00:12:04.664 fused_ordering(183) 00:12:04.664 fused_ordering(184) 00:12:04.664 fused_ordering(185) 00:12:04.664 fused_ordering(186) 00:12:04.664 fused_ordering(187) 00:12:04.664 fused_ordering(188) 00:12:04.664 
fused_ordering(189) 00:12:04.664 fused_ordering(190) 00:12:04.664 fused_ordering(191) 00:12:04.664 fused_ordering(192) 00:12:04.664 fused_ordering(193) 00:12:04.664 fused_ordering(194) 00:12:04.664 fused_ordering(195) 00:12:04.664 fused_ordering(196) 00:12:04.664 fused_ordering(197) 00:12:04.664 fused_ordering(198) 00:12:04.664 fused_ordering(199) 00:12:04.664 fused_ordering(200) 00:12:04.664 fused_ordering(201) 00:12:04.664 fused_ordering(202) 00:12:04.664 fused_ordering(203) 00:12:04.664 fused_ordering(204) 00:12:04.664 fused_ordering(205) 00:12:04.923 fused_ordering(206) 00:12:04.923 fused_ordering(207) 00:12:04.923 fused_ordering(208) 00:12:04.923 fused_ordering(209) 00:12:04.923 fused_ordering(210) 00:12:04.923 fused_ordering(211) 00:12:04.923 fused_ordering(212) 00:12:04.923 fused_ordering(213) 00:12:04.923 fused_ordering(214) 00:12:04.923 fused_ordering(215) 00:12:04.923 fused_ordering(216) 00:12:04.923 fused_ordering(217) 00:12:04.923 fused_ordering(218) 00:12:04.923 fused_ordering(219) 00:12:04.923 fused_ordering(220) 00:12:04.923 fused_ordering(221) 00:12:04.923 fused_ordering(222) 00:12:04.923 fused_ordering(223) 00:12:04.923 fused_ordering(224) 00:12:04.923 fused_ordering(225) 00:12:04.923 fused_ordering(226) 00:12:04.923 fused_ordering(227) 00:12:04.923 fused_ordering(228) 00:12:04.923 fused_ordering(229) 00:12:04.923 fused_ordering(230) 00:12:04.923 fused_ordering(231) 00:12:04.923 fused_ordering(232) 00:12:04.923 fused_ordering(233) 00:12:04.923 fused_ordering(234) 00:12:04.923 fused_ordering(235) 00:12:04.923 fused_ordering(236) 00:12:04.923 fused_ordering(237) 00:12:04.923 fused_ordering(238) 00:12:04.923 fused_ordering(239) 00:12:04.923 fused_ordering(240) 00:12:04.923 fused_ordering(241) 00:12:04.923 fused_ordering(242) 00:12:04.923 fused_ordering(243) 00:12:04.923 fused_ordering(244) 00:12:04.923 fused_ordering(245) 00:12:04.923 fused_ordering(246) 00:12:04.923 fused_ordering(247) 00:12:04.923 fused_ordering(248) 00:12:04.923 fused_ordering(249) 00:12:04.923 fused_ordering(250) 00:12:04.923 fused_ordering(251) 00:12:04.923 fused_ordering(252) 00:12:04.923 fused_ordering(253) 00:12:04.923 fused_ordering(254) 00:12:04.923 fused_ordering(255) 00:12:04.923 fused_ordering(256) 00:12:04.923 fused_ordering(257) 00:12:04.923 fused_ordering(258) 00:12:04.923 fused_ordering(259) 00:12:04.923 fused_ordering(260) 00:12:04.923 fused_ordering(261) 00:12:04.923 fused_ordering(262) 00:12:04.923 fused_ordering(263) 00:12:04.923 fused_ordering(264) 00:12:04.923 fused_ordering(265) 00:12:04.923 fused_ordering(266) 00:12:04.923 fused_ordering(267) 00:12:04.923 fused_ordering(268) 00:12:04.923 fused_ordering(269) 00:12:04.923 fused_ordering(270) 00:12:04.923 fused_ordering(271) 00:12:04.923 fused_ordering(272) 00:12:04.923 fused_ordering(273) 00:12:04.923 fused_ordering(274) 00:12:04.923 fused_ordering(275) 00:12:04.923 fused_ordering(276) 00:12:04.923 fused_ordering(277) 00:12:04.923 fused_ordering(278) 00:12:04.923 fused_ordering(279) 00:12:04.923 fused_ordering(280) 00:12:04.923 fused_ordering(281) 00:12:04.923 fused_ordering(282) 00:12:04.923 fused_ordering(283) 00:12:04.923 fused_ordering(284) 00:12:04.924 fused_ordering(285) 00:12:04.924 fused_ordering(286) 00:12:04.924 fused_ordering(287) 00:12:04.924 fused_ordering(288) 00:12:04.924 fused_ordering(289) 00:12:04.924 fused_ordering(290) 00:12:04.924 fused_ordering(291) 00:12:04.924 fused_ordering(292) 00:12:04.924 fused_ordering(293) 00:12:04.924 fused_ordering(294) 00:12:04.924 fused_ordering(295) 00:12:04.924 fused_ordering(296) 
00:12:04.924 fused_ordering(297) 00:12:04.924 fused_ordering(298) 00:12:04.924 fused_ordering(299) 00:12:04.924 fused_ordering(300) 00:12:04.924 fused_ordering(301) 00:12:04.924 fused_ordering(302) 00:12:04.924 fused_ordering(303) 00:12:04.924 fused_ordering(304) 00:12:04.924 fused_ordering(305) 00:12:04.924 fused_ordering(306) 00:12:04.924 fused_ordering(307) 00:12:04.924 fused_ordering(308) 00:12:04.924 fused_ordering(309) 00:12:04.924 fused_ordering(310) 00:12:04.924 fused_ordering(311) 00:12:04.924 fused_ordering(312) 00:12:04.924 fused_ordering(313) 00:12:04.924 fused_ordering(314) 00:12:04.924 fused_ordering(315) 00:12:04.924 fused_ordering(316) 00:12:04.924 fused_ordering(317) 00:12:04.924 fused_ordering(318) 00:12:04.924 fused_ordering(319) 00:12:04.924 fused_ordering(320) 00:12:04.924 fused_ordering(321) 00:12:04.924 fused_ordering(322) 00:12:04.924 fused_ordering(323) 00:12:04.924 fused_ordering(324) 00:12:04.924 fused_ordering(325) 00:12:04.924 fused_ordering(326) 00:12:04.924 fused_ordering(327) 00:12:04.924 fused_ordering(328) 00:12:04.924 fused_ordering(329) 00:12:04.924 fused_ordering(330) 00:12:04.924 fused_ordering(331) 00:12:04.924 fused_ordering(332) 00:12:04.924 fused_ordering(333) 00:12:04.924 fused_ordering(334) 00:12:04.924 fused_ordering(335) 00:12:04.924 fused_ordering(336) 00:12:04.924 fused_ordering(337) 00:12:04.924 fused_ordering(338) 00:12:04.924 fused_ordering(339) 00:12:04.924 fused_ordering(340) 00:12:04.924 fused_ordering(341) 00:12:04.924 fused_ordering(342) 00:12:04.924 fused_ordering(343) 00:12:04.924 fused_ordering(344) 00:12:04.924 fused_ordering(345) 00:12:04.924 fused_ordering(346) 00:12:04.924 fused_ordering(347) 00:12:04.924 fused_ordering(348) 00:12:04.924 fused_ordering(349) 00:12:04.924 fused_ordering(350) 00:12:04.924 fused_ordering(351) 00:12:04.924 fused_ordering(352) 00:12:04.924 fused_ordering(353) 00:12:04.924 fused_ordering(354) 00:12:04.924 fused_ordering(355) 00:12:04.924 fused_ordering(356) 00:12:04.924 fused_ordering(357) 00:12:04.924 fused_ordering(358) 00:12:04.924 fused_ordering(359) 00:12:04.924 fused_ordering(360) 00:12:04.924 fused_ordering(361) 00:12:04.924 fused_ordering(362) 00:12:04.924 fused_ordering(363) 00:12:04.924 fused_ordering(364) 00:12:04.924 fused_ordering(365) 00:12:04.924 fused_ordering(366) 00:12:04.924 fused_ordering(367) 00:12:04.924 fused_ordering(368) 00:12:04.924 fused_ordering(369) 00:12:04.924 fused_ordering(370) 00:12:04.924 fused_ordering(371) 00:12:04.924 fused_ordering(372) 00:12:04.924 fused_ordering(373) 00:12:04.924 fused_ordering(374) 00:12:04.924 fused_ordering(375) 00:12:04.924 fused_ordering(376) 00:12:04.924 fused_ordering(377) 00:12:04.924 fused_ordering(378) 00:12:04.924 fused_ordering(379) 00:12:04.924 fused_ordering(380) 00:12:04.924 fused_ordering(381) 00:12:04.924 fused_ordering(382) 00:12:04.924 fused_ordering(383) 00:12:04.924 fused_ordering(384) 00:12:04.924 fused_ordering(385) 00:12:04.924 fused_ordering(386) 00:12:04.924 fused_ordering(387) 00:12:04.924 fused_ordering(388) 00:12:04.924 fused_ordering(389) 00:12:04.924 fused_ordering(390) 00:12:04.924 fused_ordering(391) 00:12:04.924 fused_ordering(392) 00:12:04.924 fused_ordering(393) 00:12:04.924 fused_ordering(394) 00:12:04.924 fused_ordering(395) 00:12:04.924 fused_ordering(396) 00:12:04.924 fused_ordering(397) 00:12:04.924 fused_ordering(398) 00:12:04.924 fused_ordering(399) 00:12:04.924 fused_ordering(400) 00:12:04.924 fused_ordering(401) 00:12:04.924 fused_ordering(402) 00:12:04.924 fused_ordering(403) 00:12:04.924 
fused_ordering(404) 00:12:04.924 fused_ordering(405) 00:12:04.924 fused_ordering(406) 00:12:04.924 fused_ordering(407) 00:12:04.924 fused_ordering(408) 00:12:04.924 fused_ordering(409) 00:12:04.924 fused_ordering(410) 00:12:05.490 fused_ordering(411) 00:12:05.490 fused_ordering(412) 00:12:05.490 fused_ordering(413) 00:12:05.490 fused_ordering(414) 00:12:05.490 fused_ordering(415) 00:12:05.490 fused_ordering(416) 00:12:05.490 fused_ordering(417) 00:12:05.490 fused_ordering(418) 00:12:05.490 fused_ordering(419) 00:12:05.490 fused_ordering(420) 00:12:05.490 fused_ordering(421) 00:12:05.490 fused_ordering(422) 00:12:05.490 fused_ordering(423) 00:12:05.490 fused_ordering(424) 00:12:05.490 fused_ordering(425) 00:12:05.490 fused_ordering(426) 00:12:05.490 fused_ordering(427) 00:12:05.490 fused_ordering(428) 00:12:05.490 fused_ordering(429) 00:12:05.490 fused_ordering(430) 00:12:05.490 fused_ordering(431) 00:12:05.490 fused_ordering(432) 00:12:05.490 fused_ordering(433) 00:12:05.490 fused_ordering(434) 00:12:05.490 fused_ordering(435) 00:12:05.490 fused_ordering(436) 00:12:05.490 fused_ordering(437) 00:12:05.490 fused_ordering(438) 00:12:05.490 fused_ordering(439) 00:12:05.490 fused_ordering(440) 00:12:05.490 fused_ordering(441) 00:12:05.490 fused_ordering(442) 00:12:05.490 fused_ordering(443) 00:12:05.490 fused_ordering(444) 00:12:05.490 fused_ordering(445) 00:12:05.490 fused_ordering(446) 00:12:05.490 fused_ordering(447) 00:12:05.490 fused_ordering(448) 00:12:05.490 fused_ordering(449) 00:12:05.490 fused_ordering(450) 00:12:05.490 fused_ordering(451) 00:12:05.490 fused_ordering(452) 00:12:05.490 fused_ordering(453) 00:12:05.490 fused_ordering(454) 00:12:05.490 fused_ordering(455) 00:12:05.490 fused_ordering(456) 00:12:05.490 fused_ordering(457) 00:12:05.490 fused_ordering(458) 00:12:05.490 fused_ordering(459) 00:12:05.490 fused_ordering(460) 00:12:05.490 fused_ordering(461) 00:12:05.490 fused_ordering(462) 00:12:05.490 fused_ordering(463) 00:12:05.490 fused_ordering(464) 00:12:05.490 fused_ordering(465) 00:12:05.490 fused_ordering(466) 00:12:05.490 fused_ordering(467) 00:12:05.490 fused_ordering(468) 00:12:05.490 fused_ordering(469) 00:12:05.490 fused_ordering(470) 00:12:05.490 fused_ordering(471) 00:12:05.490 fused_ordering(472) 00:12:05.490 fused_ordering(473) 00:12:05.490 fused_ordering(474) 00:12:05.490 fused_ordering(475) 00:12:05.490 fused_ordering(476) 00:12:05.490 fused_ordering(477) 00:12:05.490 fused_ordering(478) 00:12:05.490 fused_ordering(479) 00:12:05.490 fused_ordering(480) 00:12:05.490 fused_ordering(481) 00:12:05.490 fused_ordering(482) 00:12:05.490 fused_ordering(483) 00:12:05.490 fused_ordering(484) 00:12:05.490 fused_ordering(485) 00:12:05.490 fused_ordering(486) 00:12:05.490 fused_ordering(487) 00:12:05.490 fused_ordering(488) 00:12:05.490 fused_ordering(489) 00:12:05.490 fused_ordering(490) 00:12:05.490 fused_ordering(491) 00:12:05.490 fused_ordering(492) 00:12:05.490 fused_ordering(493) 00:12:05.490 fused_ordering(494) 00:12:05.490 fused_ordering(495) 00:12:05.490 fused_ordering(496) 00:12:05.490 fused_ordering(497) 00:12:05.490 fused_ordering(498) 00:12:05.490 fused_ordering(499) 00:12:05.490 fused_ordering(500) 00:12:05.490 fused_ordering(501) 00:12:05.490 fused_ordering(502) 00:12:05.490 fused_ordering(503) 00:12:05.490 fused_ordering(504) 00:12:05.490 fused_ordering(505) 00:12:05.490 fused_ordering(506) 00:12:05.490 fused_ordering(507) 00:12:05.490 fused_ordering(508) 00:12:05.490 fused_ordering(509) 00:12:05.490 fused_ordering(510) 00:12:05.490 fused_ordering(511) 
00:12:05.490 fused_ordering(512) 00:12:05.490 fused_ordering(513) 00:12:05.490 fused_ordering(514) 00:12:05.490 fused_ordering(515) 00:12:05.490 fused_ordering(516) 00:12:05.490 fused_ordering(517) 00:12:05.490 fused_ordering(518) 00:12:05.490 fused_ordering(519) 00:12:05.490 fused_ordering(520) 00:12:05.490 fused_ordering(521) 00:12:05.490 fused_ordering(522) 00:12:05.490 fused_ordering(523) 00:12:05.490 fused_ordering(524) 00:12:05.490 fused_ordering(525) 00:12:05.490 fused_ordering(526) 00:12:05.490 fused_ordering(527) 00:12:05.490 fused_ordering(528) 00:12:05.490 fused_ordering(529) 00:12:05.490 fused_ordering(530) 00:12:05.490 fused_ordering(531) 00:12:05.490 fused_ordering(532) 00:12:05.490 fused_ordering(533) 00:12:05.490 fused_ordering(534) 00:12:05.490 fused_ordering(535) 00:12:05.490 fused_ordering(536) 00:12:05.490 fused_ordering(537) 00:12:05.490 fused_ordering(538) 00:12:05.490 fused_ordering(539) 00:12:05.490 fused_ordering(540) 00:12:05.490 fused_ordering(541) 00:12:05.490 fused_ordering(542) 00:12:05.490 fused_ordering(543) 00:12:05.490 fused_ordering(544) 00:12:05.490 fused_ordering(545) 00:12:05.490 fused_ordering(546) 00:12:05.490 fused_ordering(547) 00:12:05.490 fused_ordering(548) 00:12:05.490 fused_ordering(549) 00:12:05.490 fused_ordering(550) 00:12:05.490 fused_ordering(551) 00:12:05.490 fused_ordering(552) 00:12:05.490 fused_ordering(553) 00:12:05.490 fused_ordering(554) 00:12:05.490 fused_ordering(555) 00:12:05.490 fused_ordering(556) 00:12:05.490 fused_ordering(557) 00:12:05.490 fused_ordering(558) 00:12:05.490 fused_ordering(559) 00:12:05.490 fused_ordering(560) 00:12:05.490 fused_ordering(561) 00:12:05.490 fused_ordering(562) 00:12:05.490 fused_ordering(563) 00:12:05.490 fused_ordering(564) 00:12:05.490 fused_ordering(565) 00:12:05.490 fused_ordering(566) 00:12:05.490 fused_ordering(567) 00:12:05.490 fused_ordering(568) 00:12:05.490 fused_ordering(569) 00:12:05.490 fused_ordering(570) 00:12:05.490 fused_ordering(571) 00:12:05.490 fused_ordering(572) 00:12:05.490 fused_ordering(573) 00:12:05.490 fused_ordering(574) 00:12:05.490 fused_ordering(575) 00:12:05.490 fused_ordering(576) 00:12:05.490 fused_ordering(577) 00:12:05.490 fused_ordering(578) 00:12:05.490 fused_ordering(579) 00:12:05.490 fused_ordering(580) 00:12:05.490 fused_ordering(581) 00:12:05.490 fused_ordering(582) 00:12:05.490 fused_ordering(583) 00:12:05.490 fused_ordering(584) 00:12:05.490 fused_ordering(585) 00:12:05.490 fused_ordering(586) 00:12:05.490 fused_ordering(587) 00:12:05.490 fused_ordering(588) 00:12:05.490 fused_ordering(589) 00:12:05.490 fused_ordering(590) 00:12:05.490 fused_ordering(591) 00:12:05.490 fused_ordering(592) 00:12:05.490 fused_ordering(593) 00:12:05.490 fused_ordering(594) 00:12:05.490 fused_ordering(595) 00:12:05.490 fused_ordering(596) 00:12:05.490 fused_ordering(597) 00:12:05.490 fused_ordering(598) 00:12:05.490 fused_ordering(599) 00:12:05.490 fused_ordering(600) 00:12:05.490 fused_ordering(601) 00:12:05.490 fused_ordering(602) 00:12:05.490 fused_ordering(603) 00:12:05.490 fused_ordering(604) 00:12:05.490 fused_ordering(605) 00:12:05.490 fused_ordering(606) 00:12:05.490 fused_ordering(607) 00:12:05.490 fused_ordering(608) 00:12:05.490 fused_ordering(609) 00:12:05.490 fused_ordering(610) 00:12:05.490 fused_ordering(611) 00:12:05.490 fused_ordering(612) 00:12:05.490 fused_ordering(613) 00:12:05.490 fused_ordering(614) 00:12:05.490 fused_ordering(615) 00:12:05.748 fused_ordering(616) 00:12:05.748 fused_ordering(617) 00:12:05.748 fused_ordering(618) 00:12:05.748 
fused_ordering(619) 00:12:05.748 fused_ordering(620) 00:12:05.748 fused_ordering(621) 00:12:05.748 fused_ordering(622) 00:12:05.748 fused_ordering(623) 00:12:05.748 fused_ordering(624) 00:12:05.748 fused_ordering(625) 00:12:05.748 fused_ordering(626) 00:12:05.748 fused_ordering(627) 00:12:05.748 fused_ordering(628) 00:12:05.748 fused_ordering(629) 00:12:05.748 fused_ordering(630) 00:12:05.748 fused_ordering(631) 00:12:05.748 fused_ordering(632) 00:12:05.748 fused_ordering(633) 00:12:05.748 fused_ordering(634) 00:12:05.748 fused_ordering(635) 00:12:05.748 fused_ordering(636) 00:12:05.748 fused_ordering(637) 00:12:05.748 fused_ordering(638) 00:12:05.748 fused_ordering(639) 00:12:05.748 fused_ordering(640) 00:12:05.748 fused_ordering(641) 00:12:05.748 fused_ordering(642) 00:12:05.748 fused_ordering(643) 00:12:05.748 fused_ordering(644) 00:12:05.748 fused_ordering(645) 00:12:05.748 fused_ordering(646) 00:12:05.748 fused_ordering(647) 00:12:05.748 fused_ordering(648) 00:12:05.748 fused_ordering(649) 00:12:05.748 fused_ordering(650) 00:12:05.748 fused_ordering(651) 00:12:05.748 fused_ordering(652) 00:12:05.748 fused_ordering(653) 00:12:05.748 fused_ordering(654) 00:12:05.748 fused_ordering(655) 00:12:05.748 fused_ordering(656) 00:12:05.748 fused_ordering(657) 00:12:05.748 fused_ordering(658) 00:12:05.748 fused_ordering(659) 00:12:05.748 fused_ordering(660) 00:12:05.748 fused_ordering(661) 00:12:05.748 fused_ordering(662) 00:12:05.748 fused_ordering(663) 00:12:05.748 fused_ordering(664) 00:12:05.748 fused_ordering(665) 00:12:05.748 fused_ordering(666) 00:12:05.748 fused_ordering(667) 00:12:05.748 fused_ordering(668) 00:12:05.748 fused_ordering(669) 00:12:05.748 fused_ordering(670) 00:12:05.748 fused_ordering(671) 00:12:05.748 fused_ordering(672) 00:12:05.748 fused_ordering(673) 00:12:05.748 fused_ordering(674) 00:12:05.748 fused_ordering(675) 00:12:05.748 fused_ordering(676) 00:12:05.748 fused_ordering(677) 00:12:05.748 fused_ordering(678) 00:12:05.748 fused_ordering(679) 00:12:05.748 fused_ordering(680) 00:12:05.748 fused_ordering(681) 00:12:05.748 fused_ordering(682) 00:12:05.748 fused_ordering(683) 00:12:05.748 fused_ordering(684) 00:12:05.748 fused_ordering(685) 00:12:05.748 fused_ordering(686) 00:12:05.748 fused_ordering(687) 00:12:05.748 fused_ordering(688) 00:12:05.748 fused_ordering(689) 00:12:05.748 fused_ordering(690) 00:12:05.748 fused_ordering(691) 00:12:05.748 fused_ordering(692) 00:12:05.748 fused_ordering(693) 00:12:05.748 fused_ordering(694) 00:12:05.748 fused_ordering(695) 00:12:05.748 fused_ordering(696) 00:12:05.748 fused_ordering(697) 00:12:05.748 fused_ordering(698) 00:12:05.748 fused_ordering(699) 00:12:05.748 fused_ordering(700) 00:12:05.748 fused_ordering(701) 00:12:05.748 fused_ordering(702) 00:12:05.748 fused_ordering(703) 00:12:05.748 fused_ordering(704) 00:12:05.748 fused_ordering(705) 00:12:05.748 fused_ordering(706) 00:12:05.748 fused_ordering(707) 00:12:05.748 fused_ordering(708) 00:12:05.748 fused_ordering(709) 00:12:05.748 fused_ordering(710) 00:12:05.748 fused_ordering(711) 00:12:05.748 fused_ordering(712) 00:12:05.748 fused_ordering(713) 00:12:05.748 fused_ordering(714) 00:12:05.748 fused_ordering(715) 00:12:05.748 fused_ordering(716) 00:12:05.748 fused_ordering(717) 00:12:05.748 fused_ordering(718) 00:12:05.748 fused_ordering(719) 00:12:05.748 fused_ordering(720) 00:12:05.748 fused_ordering(721) 00:12:05.748 fused_ordering(722) 00:12:05.748 fused_ordering(723) 00:12:05.748 fused_ordering(724) 00:12:05.748 fused_ordering(725) 00:12:05.748 fused_ordering(726) 
00:12:05.748 fused_ordering(727) 00:12:05.748 fused_ordering(728) 00:12:05.748 fused_ordering(729) 00:12:05.748 fused_ordering(730) 00:12:05.748 fused_ordering(731) 00:12:05.748 fused_ordering(732) 00:12:05.748 fused_ordering(733) 00:12:05.748 fused_ordering(734) 00:12:05.748 fused_ordering(735) 00:12:05.748 fused_ordering(736) 00:12:05.748 fused_ordering(737) 00:12:05.748 fused_ordering(738) 00:12:05.748 fused_ordering(739) 00:12:05.748 fused_ordering(740) 00:12:05.748 fused_ordering(741) 00:12:05.748 fused_ordering(742) 00:12:05.748 fused_ordering(743) 00:12:05.748 fused_ordering(744) 00:12:05.748 fused_ordering(745) 00:12:05.748 fused_ordering(746) 00:12:05.748 fused_ordering(747) 00:12:05.748 fused_ordering(748) 00:12:05.748 fused_ordering(749) 00:12:05.748 fused_ordering(750) 00:12:05.748 fused_ordering(751) 00:12:05.748 fused_ordering(752) 00:12:05.748 fused_ordering(753) 00:12:05.748 fused_ordering(754) 00:12:05.748 fused_ordering(755) 00:12:05.748 fused_ordering(756) 00:12:05.748 fused_ordering(757) 00:12:05.748 fused_ordering(758) 00:12:05.748 fused_ordering(759) 00:12:05.748 fused_ordering(760) 00:12:05.748 fused_ordering(761) 00:12:05.748 fused_ordering(762) 00:12:05.748 fused_ordering(763) 00:12:05.748 fused_ordering(764) 00:12:05.748 fused_ordering(765) 00:12:05.748 fused_ordering(766) 00:12:05.748 fused_ordering(767) 00:12:05.748 fused_ordering(768) 00:12:05.748 fused_ordering(769) 00:12:05.748 fused_ordering(770) 00:12:05.748 fused_ordering(771) 00:12:05.748 fused_ordering(772) 00:12:05.748 fused_ordering(773) 00:12:05.748 fused_ordering(774) 00:12:05.748 fused_ordering(775) 00:12:05.748 fused_ordering(776) 00:12:05.748 fused_ordering(777) 00:12:05.748 fused_ordering(778) 00:12:05.748 fused_ordering(779) 00:12:05.748 fused_ordering(780) 00:12:05.748 fused_ordering(781) 00:12:05.748 fused_ordering(782) 00:12:05.748 fused_ordering(783) 00:12:05.748 fused_ordering(784) 00:12:05.748 fused_ordering(785) 00:12:05.748 fused_ordering(786) 00:12:05.748 fused_ordering(787) 00:12:05.748 fused_ordering(788) 00:12:05.748 fused_ordering(789) 00:12:05.748 fused_ordering(790) 00:12:05.748 fused_ordering(791) 00:12:05.748 fused_ordering(792) 00:12:05.748 fused_ordering(793) 00:12:05.748 fused_ordering(794) 00:12:05.748 fused_ordering(795) 00:12:05.748 fused_ordering(796) 00:12:05.748 fused_ordering(797) 00:12:05.748 fused_ordering(798) 00:12:05.748 fused_ordering(799) 00:12:05.748 fused_ordering(800) 00:12:05.748 fused_ordering(801) 00:12:05.748 fused_ordering(802) 00:12:05.748 fused_ordering(803) 00:12:05.748 fused_ordering(804) 00:12:05.748 fused_ordering(805) 00:12:05.748 fused_ordering(806) 00:12:05.748 fused_ordering(807) 00:12:05.748 fused_ordering(808) 00:12:05.748 fused_ordering(809) 00:12:05.748 fused_ordering(810) 00:12:05.748 fused_ordering(811) 00:12:05.748 fused_ordering(812) 00:12:05.748 fused_ordering(813) 00:12:05.748 fused_ordering(814) 00:12:05.748 fused_ordering(815) 00:12:05.748 fused_ordering(816) 00:12:05.748 fused_ordering(817) 00:12:05.748 fused_ordering(818) 00:12:05.748 fused_ordering(819) 00:12:05.748 fused_ordering(820) 00:12:06.313 fused_ordering(821) 00:12:06.313 fused_ordering(822) 00:12:06.313 fused_ordering(823) 00:12:06.313 fused_ordering(824) 00:12:06.313 fused_ordering(825) 00:12:06.313 fused_ordering(826) 00:12:06.313 fused_ordering(827) 00:12:06.313 fused_ordering(828) 00:12:06.313 fused_ordering(829) 00:12:06.313 fused_ordering(830) 00:12:06.313 fused_ordering(831) 00:12:06.313 fused_ordering(832) 00:12:06.313 fused_ordering(833) 00:12:06.313 
fused_ordering(834) 00:12:06.313 fused_ordering(835) 00:12:06.313 fused_ordering(836) 00:12:06.313 fused_ordering(837) 00:12:06.313 fused_ordering(838) 00:12:06.313 fused_ordering(839) 00:12:06.313 fused_ordering(840) 00:12:06.314 fused_ordering(841) 00:12:06.314 fused_ordering(842) 00:12:06.314 fused_ordering(843) 00:12:06.314 fused_ordering(844) 00:12:06.314 fused_ordering(845) 00:12:06.314 fused_ordering(846) 00:12:06.314 fused_ordering(847) 00:12:06.314 fused_ordering(848) 00:12:06.314 fused_ordering(849) 00:12:06.314 fused_ordering(850) 00:12:06.314 fused_ordering(851) 00:12:06.314 fused_ordering(852) 00:12:06.314 fused_ordering(853) 00:12:06.314 fused_ordering(854) 00:12:06.314 fused_ordering(855) 00:12:06.314 fused_ordering(856) 00:12:06.314 fused_ordering(857) 00:12:06.314 fused_ordering(858) 00:12:06.314 fused_ordering(859) 00:12:06.314 fused_ordering(860) 00:12:06.314 fused_ordering(861) 00:12:06.314 fused_ordering(862) 00:12:06.314 fused_ordering(863) 00:12:06.314 fused_ordering(864) 00:12:06.314 fused_ordering(865) 00:12:06.314 fused_ordering(866) 00:12:06.314 fused_ordering(867) 00:12:06.314 fused_ordering(868) 00:12:06.314 fused_ordering(869) 00:12:06.314 fused_ordering(870) 00:12:06.314 fused_ordering(871) 00:12:06.314 fused_ordering(872) 00:12:06.314 fused_ordering(873) 00:12:06.314 fused_ordering(874) 00:12:06.314 fused_ordering(875) 00:12:06.314 fused_ordering(876) 00:12:06.314 fused_ordering(877) 00:12:06.314 fused_ordering(878) 00:12:06.314 fused_ordering(879) 00:12:06.314 fused_ordering(880) 00:12:06.314 fused_ordering(881) 00:12:06.314 fused_ordering(882) 00:12:06.314 fused_ordering(883) 00:12:06.314 fused_ordering(884) 00:12:06.314 fused_ordering(885) 00:12:06.314 fused_ordering(886) 00:12:06.314 fused_ordering(887) 00:12:06.314 fused_ordering(888) 00:12:06.314 fused_ordering(889) 00:12:06.314 fused_ordering(890) 00:12:06.314 fused_ordering(891) 00:12:06.314 fused_ordering(892) 00:12:06.314 fused_ordering(893) 00:12:06.314 fused_ordering(894) 00:12:06.314 fused_ordering(895) 00:12:06.314 fused_ordering(896) 00:12:06.314 fused_ordering(897) 00:12:06.314 fused_ordering(898) 00:12:06.314 fused_ordering(899) 00:12:06.314 fused_ordering(900) 00:12:06.314 fused_ordering(901) 00:12:06.314 fused_ordering(902) 00:12:06.314 fused_ordering(903) 00:12:06.314 fused_ordering(904) 00:12:06.314 fused_ordering(905) 00:12:06.314 fused_ordering(906) 00:12:06.314 fused_ordering(907) 00:12:06.314 fused_ordering(908) 00:12:06.314 fused_ordering(909) 00:12:06.314 fused_ordering(910) 00:12:06.314 fused_ordering(911) 00:12:06.314 fused_ordering(912) 00:12:06.314 fused_ordering(913) 00:12:06.314 fused_ordering(914) 00:12:06.314 fused_ordering(915) 00:12:06.314 fused_ordering(916) 00:12:06.314 fused_ordering(917) 00:12:06.314 fused_ordering(918) 00:12:06.314 fused_ordering(919) 00:12:06.314 fused_ordering(920) 00:12:06.314 fused_ordering(921) 00:12:06.314 fused_ordering(922) 00:12:06.314 fused_ordering(923) 00:12:06.314 fused_ordering(924) 00:12:06.314 fused_ordering(925) 00:12:06.314 fused_ordering(926) 00:12:06.314 fused_ordering(927) 00:12:06.314 fused_ordering(928) 00:12:06.314 fused_ordering(929) 00:12:06.314 fused_ordering(930) 00:12:06.314 fused_ordering(931) 00:12:06.314 fused_ordering(932) 00:12:06.314 fused_ordering(933) 00:12:06.314 fused_ordering(934) 00:12:06.314 fused_ordering(935) 00:12:06.314 fused_ordering(936) 00:12:06.314 fused_ordering(937) 00:12:06.314 fused_ordering(938) 00:12:06.314 fused_ordering(939) 00:12:06.314 fused_ordering(940) 00:12:06.314 fused_ordering(941) 
00:12:06.314 fused_ordering(942) 00:12:06.314 fused_ordering(943) 00:12:06.314 fused_ordering(944) 00:12:06.314 fused_ordering(945) 00:12:06.314 fused_ordering(946) 00:12:06.314 fused_ordering(947) 00:12:06.314 fused_ordering(948) 00:12:06.314 fused_ordering(949) 00:12:06.314 fused_ordering(950) 00:12:06.314 fused_ordering(951) 00:12:06.314 fused_ordering(952) 00:12:06.314 fused_ordering(953) 00:12:06.314 fused_ordering(954) 00:12:06.314 fused_ordering(955) 00:12:06.314 fused_ordering(956) 00:12:06.314 fused_ordering(957) 00:12:06.314 fused_ordering(958) 00:12:06.314 fused_ordering(959) 00:12:06.314 fused_ordering(960) 00:12:06.314 fused_ordering(961) 00:12:06.314 fused_ordering(962) 00:12:06.314 fused_ordering(963) 00:12:06.314 fused_ordering(964) 00:12:06.314 fused_ordering(965) 00:12:06.314 fused_ordering(966) 00:12:06.314 fused_ordering(967) 00:12:06.314 fused_ordering(968) 00:12:06.314 fused_ordering(969) 00:12:06.314 fused_ordering(970) 00:12:06.314 fused_ordering(971) 00:12:06.314 fused_ordering(972) 00:12:06.314 fused_ordering(973) 00:12:06.314 fused_ordering(974) 00:12:06.314 fused_ordering(975) 00:12:06.314 fused_ordering(976) 00:12:06.314 fused_ordering(977) 00:12:06.314 fused_ordering(978) 00:12:06.314 fused_ordering(979) 00:12:06.314 fused_ordering(980) 00:12:06.314 fused_ordering(981) 00:12:06.314 fused_ordering(982) 00:12:06.314 fused_ordering(983) 00:12:06.314 fused_ordering(984) 00:12:06.314 fused_ordering(985) 00:12:06.314 fused_ordering(986) 00:12:06.314 fused_ordering(987) 00:12:06.314 fused_ordering(988) 00:12:06.314 fused_ordering(989) 00:12:06.314 fused_ordering(990) 00:12:06.314 fused_ordering(991) 00:12:06.314 fused_ordering(992) 00:12:06.314 fused_ordering(993) 00:12:06.314 fused_ordering(994) 00:12:06.314 fused_ordering(995) 00:12:06.314 fused_ordering(996) 00:12:06.314 fused_ordering(997) 00:12:06.314 fused_ordering(998) 00:12:06.314 fused_ordering(999) 00:12:06.314 fused_ordering(1000) 00:12:06.314 fused_ordering(1001) 00:12:06.314 fused_ordering(1002) 00:12:06.314 fused_ordering(1003) 00:12:06.314 fused_ordering(1004) 00:12:06.314 fused_ordering(1005) 00:12:06.314 fused_ordering(1006) 00:12:06.314 fused_ordering(1007) 00:12:06.314 fused_ordering(1008) 00:12:06.314 fused_ordering(1009) 00:12:06.314 fused_ordering(1010) 00:12:06.314 fused_ordering(1011) 00:12:06.314 fused_ordering(1012) 00:12:06.314 fused_ordering(1013) 00:12:06.314 fused_ordering(1014) 00:12:06.314 fused_ordering(1015) 00:12:06.314 fused_ordering(1016) 00:12:06.314 fused_ordering(1017) 00:12:06.314 fused_ordering(1018) 00:12:06.314 fused_ordering(1019) 00:12:06.314 fused_ordering(1020) 00:12:06.314 fused_ordering(1021) 00:12:06.314 fused_ordering(1022) 00:12:06.314 fused_ordering(1023) 00:12:06.314 18:07:04 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:06.314 18:07:04 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:06.314 18:07:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:06.314 18:07:04 -- nvmf/common.sh@116 -- # sync 00:12:06.314 18:07:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:06.314 18:07:04 -- nvmf/common.sh@119 -- # set +e 00:12:06.314 18:07:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:06.314 18:07:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:06.314 rmmod nvme_tcp 00:12:06.314 rmmod nvme_fabrics 00:12:06.314 rmmod nvme_keyring 00:12:06.314 18:07:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:06.314 18:07:04 -- nvmf/common.sh@123 -- # set -e 00:12:06.314 18:07:04 -- nvmf/common.sh@124 -- # return 0 
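The nvmfcleanup trace above shows the harness dropping errexit and retrying the kernel-module unload before the target process is killed off. A minimal standalone sketch of the same idea, with the module names taken from the rmmod output above (the break/sleep handling here is an assumption, not the literal nvmf/common.sh source):

# best-effort unload of the initiator-side NVMe/TCP stack; tolerate "module in use" on early tries
set +e
for i in {1..20}; do
    # modprobe -r removes the module plus its now-unused dependencies
    # (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above)
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e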
00:12:06.314 18:07:04 -- nvmf/common.sh@477 -- # '[' -n 68742 ']' 00:12:06.314 18:07:04 -- nvmf/common.sh@478 -- # killprocess 68742 00:12:06.314 18:07:04 -- common/autotest_common.sh@926 -- # '[' -z 68742 ']' 00:12:06.314 18:07:04 -- common/autotest_common.sh@930 -- # kill -0 68742 00:12:06.314 18:07:04 -- common/autotest_common.sh@931 -- # uname 00:12:06.314 18:07:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:06.314 18:07:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68742 00:12:06.314 18:07:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:06.314 18:07:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:06.314 killing process with pid 68742 00:12:06.314 18:07:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68742' 00:12:06.314 18:07:04 -- common/autotest_common.sh@945 -- # kill 68742 00:12:06.314 18:07:04 -- common/autotest_common.sh@950 -- # wait 68742 00:12:06.572 18:07:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:06.572 18:07:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:06.572 18:07:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:06.572 18:07:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.572 18:07:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:06.572 18:07:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.572 18:07:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.572 18:07:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.831 18:07:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:06.831 00:12:06.831 real 0m3.915s 00:12:06.831 user 0m4.675s 00:12:06.831 sys 0m1.282s 00:12:06.831 18:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.831 ************************************ 00:12:06.831 END TEST nvmf_fused_ordering 00:12:06.831 18:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:06.831 ************************************ 00:12:06.831 18:07:04 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:06.831 18:07:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:06.831 18:07:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:06.831 18:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:06.831 ************************************ 00:12:06.831 START TEST nvmf_delete_subsystem 00:12:06.831 ************************************ 00:12:06.831 18:07:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:06.831 * Looking for test storage... 
00:12:06.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:06.831 18:07:04 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:06.831 18:07:04 -- nvmf/common.sh@7 -- # uname -s 00:12:06.831 18:07:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.831 18:07:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.831 18:07:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.831 18:07:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.831 18:07:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.831 18:07:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.831 18:07:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.831 18:07:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.831 18:07:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.831 18:07:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.831 18:07:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:12:06.831 18:07:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:12:06.831 18:07:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.831 18:07:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.831 18:07:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:06.831 18:07:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.831 18:07:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.831 18:07:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.831 18:07:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.831 18:07:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.831 18:07:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.831 18:07:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.831 18:07:04 -- 
paths/export.sh@5 -- # export PATH 00:12:06.831 18:07:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.831 18:07:04 -- nvmf/common.sh@46 -- # : 0 00:12:06.831 18:07:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:06.831 18:07:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:06.831 18:07:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:06.831 18:07:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.831 18:07:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.831 18:07:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:06.831 18:07:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:06.831 18:07:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:06.831 18:07:04 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:06.831 18:07:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:06.831 18:07:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.831 18:07:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:06.831 18:07:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:06.831 18:07:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:06.831 18:07:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.831 18:07:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.831 18:07:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.831 18:07:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:06.831 18:07:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:06.831 18:07:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:06.831 18:07:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:06.831 18:07:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:06.831 18:07:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:06.831 18:07:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.831 18:07:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.831 18:07:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:06.831 18:07:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:06.831 18:07:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:06.831 18:07:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:06.831 18:07:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:06.831 18:07:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.831 18:07:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:06.831 18:07:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:06.831 18:07:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:06.831 18:07:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:06.831 18:07:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:06.831 18:07:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:06.831 Cannot find device "nvmf_tgt_br" 00:12:06.831 
18:07:04 -- nvmf/common.sh@154 -- # true 00:12:06.831 18:07:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:06.831 Cannot find device "nvmf_tgt_br2" 00:12:06.831 18:07:04 -- nvmf/common.sh@155 -- # true 00:12:06.831 18:07:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:06.831 18:07:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:06.831 Cannot find device "nvmf_tgt_br" 00:12:06.831 18:07:04 -- nvmf/common.sh@157 -- # true 00:12:06.831 18:07:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:06.831 Cannot find device "nvmf_tgt_br2" 00:12:06.831 18:07:04 -- nvmf/common.sh@158 -- # true 00:12:06.831 18:07:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:06.831 18:07:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:07.090 18:07:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.090 18:07:04 -- nvmf/common.sh@161 -- # true 00:12:07.090 18:07:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.090 18:07:04 -- nvmf/common.sh@162 -- # true 00:12:07.090 18:07:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.090 18:07:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:07.090 18:07:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:07.090 18:07:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:07.090 18:07:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:07.090 18:07:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:07.090 18:07:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:07.090 18:07:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:07.090 18:07:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:07.090 18:07:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:07.090 18:07:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:07.090 18:07:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:07.090 18:07:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:07.090 18:07:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:07.090 18:07:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:07.090 18:07:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:07.090 18:07:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:07.090 18:07:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:07.090 18:07:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:07.090 18:07:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:07.090 18:07:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:07.090 18:07:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:07.090 18:07:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:07.090 18:07:04 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:07.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:12:07.090 00:12:07.090 --- 10.0.0.2 ping statistics --- 00:12:07.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.090 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:07.090 18:07:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:07.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:07.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:07.090 00:12:07.090 --- 10.0.0.3 ping statistics --- 00:12:07.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.090 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:07.090 18:07:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:07.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:07.090 00:12:07.090 --- 10.0.0.1 ping statistics --- 00:12:07.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.090 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:07.090 18:07:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.090 18:07:04 -- nvmf/common.sh@421 -- # return 0 00:12:07.090 18:07:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:07.090 18:07:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.090 18:07:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:07.091 18:07:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:07.091 18:07:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.091 18:07:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:07.091 18:07:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:07.091 18:07:04 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:07.091 18:07:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:07.091 18:07:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:07.091 18:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:07.091 18:07:04 -- nvmf/common.sh@469 -- # nvmfpid=69002 00:12:07.091 18:07:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:07.091 18:07:04 -- nvmf/common.sh@470 -- # waitforlisten 69002 00:12:07.091 18:07:04 -- common/autotest_common.sh@819 -- # '[' -z 69002 ']' 00:12:07.091 18:07:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.091 18:07:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.091 18:07:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.091 18:07:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.091 18:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:07.349 [2024-04-25 18:07:05.057381] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
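The pings above confirm the topology that nvmf_veth_init just built: the initiator keeps nvmf_init_if (10.0.0.1) on the host, the two target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge. A condensed sketch of that setup, using only the interface names and addresses from the trace (the ordering and error handling in nvmf/common.sh are more involved):

# one namespace, three veth pairs, one bridge, and a firewall exception for the NVMe/TCP port
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT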
00:12:07.349 [2024-04-25 18:07:05.057483] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.349 [2024-04-25 18:07:05.200480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:07.607 [2024-04-25 18:07:05.322291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:07.607 [2024-04-25 18:07:05.322740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.607 [2024-04-25 18:07:05.322809] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.607 [2024-04-25 18:07:05.323054] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.607 [2024-04-25 18:07:05.323253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.607 [2024-04-25 18:07:05.323267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.174 18:07:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.174 18:07:05 -- common/autotest_common.sh@852 -- # return 0 00:12:08.174 18:07:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:08.174 18:07:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:08.174 18:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 18:07:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.174 18:07:05 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.174 18:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.174 18:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 [2024-04-25 18:07:05.970705] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.174 18:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.174 18:07:05 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.174 18:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.174 18:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 18:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.174 18:07:05 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.174 18:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.174 18:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 [2024-04-25 18:07:05.986904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.174 18:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.174 18:07:05 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:08.174 18:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.174 18:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 NULL1 00:12:08.174 18:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.174 18:07:05 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:08.174 18:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.174 18:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 
Delay0 00:12:08.174 18:07:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.174 18:07:06 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.174 18:07:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.174 18:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 18:07:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.174 18:07:06 -- target/delete_subsystem.sh@28 -- # perf_pid=69053 00:12:08.174 18:07:06 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:08.174 18:07:06 -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:08.432 [2024-04-25 18:07:06.191385] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:10.335 18:07:08 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.335 18:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.335 18:07:08 -- common/autotest_common.sh@10 -- # set +x 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 
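The target-side configuration that delete_subsystem.sh drives through rpc_cmd above can be replayed by hand with scripts/rpc.py against the default /var/tmp/spdk.sock (assuming rpc_cmd simply forwards its arguments there); every argument below is copied from the trace. The point of the test is the final call: the subsystem is deleted while spdk_nvme_perf still holds a queue depth of 128 against the delay bdev, which is what drives the burst of "completed with error" entries that follows.

cd /home/vagrant/spdk_repo/spdk
# target side: TCP transport, subsystem, listener, and a null bdev wrapped in a delay bdev namespace
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512       # 1000 MB backing size, 512-byte blocks
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # added latency per I/O, in microseconds
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# initiator side: start I/O, give it a head start, then delete the subsystem underneath it
build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1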
Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 starting I/O failed: -6 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 [2024-04-25 18:07:08.224991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2080 is same with the state(5) to be set 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.335 Read completed with error (sct=0, sc=8) 00:12:10.335 Write completed with error (sct=0, sc=8) 00:12:10.336 
[2024-04-25 18:07:08.227952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1584af0 is same with the state(5) to be set 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Write completed with error (sct=0, sc=8) 00:12:10.336 Write completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Write completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Write completed with error (sct=0, sc=8) 00:12:10.336 Write completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Write completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Write completed with error (sct=0, sc=8) 00:12:10.336 starting I/O failed: -6 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 Read completed with error (sct=0, sc=8) 00:12:10.336 [2024-04-25 18:07:08.228739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f50a0000c00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228893] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.228996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the 
state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2b00 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.336 [2024-04-25 18:07:08.229361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.337 [2024-04-25 18:07:08.229369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.337 [2024-04-25 18:07:08.229377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.337 [2024-04-25 18:07:08.229385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.337 [2024-04-25 18:07:08.229394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3460 is same with the state(5) to be set 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read 
completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 [2024-04-25 18:07:08.229612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f50a000c1d0 is same with the state(5) to be set 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 [2024-04-25 18:07:08.229791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f50a000bf20 is same with the state(5) to be set 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 Read completed with error (sct=0, sc=8) 00:12:10.337 Write completed with 
error (sct=0, sc=8) 00:12:10.337 Write completed with error (sct=0, sc=8) 00:12:10.337 [2024-04-25 18:07:08.229980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f50a000c480 is same with the state(5) to be set 00:12:11.713 [2024-04-25 18:07:09.204952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2f80 is same with the state(5) to be set 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 [2024-04-25 18:07:09.225936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1584840 is same with the state(5) to be set 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Write completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 Read completed with error (sct=0, sc=8) 00:12:11.713 [2024-04-25 18:07:09.227414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a3c10 is same with the state(5) to be set 00:12:11.713 [2024-04-25 18:07:09.228400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2f80 (9): Bad file descriptor 00:12:11.713 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:11.713 18:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.713 18:07:09 -- target/delete_subsystem.sh@34 -- # delay=0 00:12:11.713 18:07:09 -- target/delete_subsystem.sh@35 -- # kill -0 69053 00:12:11.713 18:07:09 -- 
target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:11.713 Initializing NVMe Controllers 00:12:11.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:11.713 Controller IO queue size 128, less than required. 00:12:11.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:11.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:11.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:11.713 Initialization complete. Launching workers. 00:12:11.713 ======================================================== 00:12:11.713 Latency(us) 00:12:11.713 Device Information : IOPS MiB/s Average min max 00:12:11.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.01 0.08 905838.08 1079.79 1010830.69 00:12:11.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 139.17 0.07 921401.32 888.06 1015047.57 00:12:11.713 ======================================================== 00:12:11.713 Total : 304.18 0.15 912958.52 888.06 1015047.57 00:12:11.713 00:12:11.971 18:07:09 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@35 -- # kill -0 69053 00:12:11.972 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (69053) - No such process 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@45 -- # NOT wait 69053 00:12:11.972 18:07:09 -- common/autotest_common.sh@640 -- # local es=0 00:12:11.972 18:07:09 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 69053 00:12:11.972 18:07:09 -- common/autotest_common.sh@628 -- # local arg=wait 00:12:11.972 18:07:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:11.972 18:07:09 -- common/autotest_common.sh@632 -- # type -t wait 00:12:11.972 18:07:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:11.972 18:07:09 -- common/autotest_common.sh@643 -- # wait 69053 00:12:11.972 18:07:09 -- common/autotest_common.sh@643 -- # es=1 00:12:11.972 18:07:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:11.972 18:07:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:11.972 18:07:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:11.972 18:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.972 18:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:11.972 18:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.972 18:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.972 18:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:11.972 [2024-04-25 18:07:09.754754] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.972 18:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.972 18:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.972 18:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:11.972 18:07:09 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@54 -- # perf_pid=69101 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@56 -- # delay=0 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:11.972 18:07:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:12.229 [2024-04-25 18:07:09.934518] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:12.488 18:07:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:12.488 18:07:10 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:12.488 18:07:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:13.056 18:07:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.056 18:07:10 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:13.056 18:07:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:13.622 18:07:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.622 18:07:11 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:13.622 18:07:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:13.880 18:07:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.880 18:07:11 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:13.880 18:07:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:14.447 18:07:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:14.447 18:07:12 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:14.447 18:07:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.013 18:07:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.013 18:07:12 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:15.013 18:07:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.272 Initializing NVMe Controllers 00:12:15.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:15.272 Controller IO queue size 128, less than required. 00:12:15.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:15.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:15.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:15.272 Initialization complete. Launching workers. 
00:12:15.272 ======================================================== 00:12:15.272 Latency(us) 00:12:15.272 Device Information : IOPS MiB/s Average min max 00:12:15.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003111.32 1000117.82 1007850.00 00:12:15.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004897.54 1000552.10 1011733.98 00:12:15.272 ======================================================== 00:12:15.272 Total : 256.00 0.12 1004004.43 1000117.82 1011733.98 00:12:15.272 00:12:15.531 18:07:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.531 18:07:13 -- target/delete_subsystem.sh@57 -- # kill -0 69101 00:12:15.531 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (69101) - No such process 00:12:15.531 18:07:13 -- target/delete_subsystem.sh@67 -- # wait 69101 00:12:15.531 18:07:13 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:15.531 18:07:13 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:15.531 18:07:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:15.531 18:07:13 -- nvmf/common.sh@116 -- # sync 00:12:15.531 18:07:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:15.531 18:07:13 -- nvmf/common.sh@119 -- # set +e 00:12:15.531 18:07:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:15.531 18:07:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:15.531 rmmod nvme_tcp 00:12:15.531 rmmod nvme_fabrics 00:12:15.531 rmmod nvme_keyring 00:12:15.531 18:07:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:15.531 18:07:13 -- nvmf/common.sh@123 -- # set -e 00:12:15.531 18:07:13 -- nvmf/common.sh@124 -- # return 0 00:12:15.531 18:07:13 -- nvmf/common.sh@477 -- # '[' -n 69002 ']' 00:12:15.531 18:07:13 -- nvmf/common.sh@478 -- # killprocess 69002 00:12:15.531 18:07:13 -- common/autotest_common.sh@926 -- # '[' -z 69002 ']' 00:12:15.531 18:07:13 -- common/autotest_common.sh@930 -- # kill -0 69002 00:12:15.531 18:07:13 -- common/autotest_common.sh@931 -- # uname 00:12:15.531 18:07:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:15.531 18:07:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69002 00:12:15.531 18:07:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:15.531 18:07:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:15.531 killing process with pid 69002 00:12:15.531 18:07:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69002' 00:12:15.531 18:07:13 -- common/autotest_common.sh@945 -- # kill 69002 00:12:15.531 18:07:13 -- common/autotest_common.sh@950 -- # wait 69002 00:12:15.789 18:07:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:15.789 18:07:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:15.789 18:07:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:15.789 18:07:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.789 18:07:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:15.789 18:07:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.789 18:07:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.789 18:07:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.789 18:07:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:15.789 00:12:15.789 real 0m9.154s 00:12:15.789 user 0m27.346s 00:12:15.789 sys 0m1.475s 00:12:15.789 18:07:13 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.789 18:07:13 -- common/autotest_common.sh@10 -- # set +x 00:12:15.789 ************************************ 00:12:15.789 END TEST nvmf_delete_subsystem 00:12:15.789 ************************************ 00:12:16.048 18:07:13 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:12:16.048 18:07:13 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:12:16.048 18:07:13 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:16.048 18:07:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:16.048 18:07:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.048 18:07:13 -- common/autotest_common.sh@10 -- # set +x 00:12:16.048 ************************************ 00:12:16.048 START TEST nvmf_vfio_user 00:12:16.048 ************************************ 00:12:16.048 18:07:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:16.048 * Looking for test storage... 00:12:16.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.048 18:07:13 -- nvmf/common.sh@7 -- # uname -s 00:12:16.048 18:07:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.048 18:07:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.048 18:07:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.048 18:07:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.048 18:07:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.048 18:07:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.048 18:07:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.048 18:07:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.048 18:07:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.048 18:07:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.048 18:07:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:12:16.048 18:07:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:12:16.048 18:07:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.048 18:07:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.048 18:07:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.048 18:07:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.048 18:07:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.048 18:07:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.048 18:07:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.048 18:07:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.048 18:07:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.048 18:07:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.048 18:07:13 -- paths/export.sh@5 -- # export PATH 00:12:16.048 18:07:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.048 18:07:13 -- nvmf/common.sh@46 -- # : 0 00:12:16.048 18:07:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:16.048 18:07:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:16.048 18:07:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:16.048 18:07:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.048 18:07:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.048 18:07:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:16.048 18:07:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:16.048 18:07:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=69223 00:12:16.048 Process pid: 69223 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 69223' 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:16.048 18:07:13 -- 
target/nvmf_vfio_user.sh@60 -- # waitforlisten 69223 00:12:16.048 18:07:13 -- common/autotest_common.sh@819 -- # '[' -z 69223 ']' 00:12:16.048 18:07:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.048 18:07:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:16.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.048 18:07:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.048 18:07:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:16.048 18:07:13 -- common/autotest_common.sh@10 -- # set +x 00:12:16.048 18:07:13 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:16.048 [2024-04-25 18:07:13.922549] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:16.049 [2024-04-25 18:07:13.922634] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.308 [2024-04-25 18:07:14.053081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.308 [2024-04-25 18:07:14.167043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:16.308 [2024-04-25 18:07:14.167426] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.308 [2024-04-25 18:07:14.167531] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.308 [2024-04-25 18:07:14.167599] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:16.308 [2024-04-25 18:07:14.167815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.308 [2024-04-25 18:07:14.168210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.308 [2024-04-25 18:07:14.168382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.308 [2024-04-25 18:07:14.168403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.242 18:07:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:17.242 18:07:14 -- common/autotest_common.sh@852 -- # return 0 00:12:17.242 18:07:14 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:18.176 18:07:15 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:18.176 18:07:16 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:18.176 18:07:16 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:18.176 18:07:16 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:18.176 18:07:16 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:18.176 18:07:16 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:18.744 Malloc1 00:12:18.744 18:07:16 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:18.744 18:07:16 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:19.002 18:07:16 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:19.260 18:07:17 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:19.260 18:07:17 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:19.260 18:07:17 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:19.518 Malloc2 00:12:19.518 18:07:17 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:19.776 18:07:17 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:20.035 18:07:17 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:20.296 18:07:17 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:20.296 18:07:17 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:20.296 18:07:17 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:20.296 18:07:17 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:20.296 18:07:17 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:20.296 18:07:17 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:20.296 [2024-04-25 18:07:18.003686] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
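For reference, the vfio-user target setup that nvmf_vfio_user.sh drives above reduces to a short rpc.py sequence. This is a sketch assembled from the commands visible in this log; the socket directory layout under /var/run/vfio-user and the 64 MiB / 512-byte Malloc bdev geometry are the test's defaults, not requirements:

#!/usr/bin/env bash
# Sketch: expose two malloc bdevs over the VFIOUSER transport, as this test does.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t VFIOUSER

for i in 1 2; do
    # Each controller gets its own vfio-user socket directory.
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done

A client then attaches with the transport string used by the identify run below, e.g. -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.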
00:12:20.296 [2024-04-25 18:07:18.003760] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69359 ] 00:12:20.296 [2024-04-25 18:07:18.142893] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:20.296 [2024-04-25 18:07:18.149831] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:20.296 [2024-04-25 18:07:18.149912] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6457ef2000 00:12:20.296 [2024-04-25 18:07:18.150811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.151791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.152824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.153812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.154833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.155822] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.156835] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.157839] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.296 [2024-04-25 18:07:18.158846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:20.296 [2024-04-25 18:07:18.158890] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6457543000 00:12:20.296 [2024-04-25 18:07:18.160019] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:20.296 [2024-04-25 18:07:18.176371] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:20.296 [2024-04-25 18:07:18.176446] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:20.296 [2024-04-25 18:07:18.178971] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:20.296 [2024-04-25 18:07:18.179042] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:20.296 [2024-04-25 18:07:18.179117] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:20.297 [2024-04-25 
18:07:18.179138] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:20.297 [2024-04-25 18:07:18.179144] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:20.297 [2024-04-25 18:07:18.179958] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:20.297 [2024-04-25 18:07:18.180004] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:20.297 [2024-04-25 18:07:18.180015] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:20.297 [2024-04-25 18:07:18.180979] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:20.297 [2024-04-25 18:07:18.181023] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:20.297 [2024-04-25 18:07:18.181035] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:20.297 [2024-04-25 18:07:18.181969] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:20.297 [2024-04-25 18:07:18.182006] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:20.297 [2024-04-25 18:07:18.182968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:20.297 [2024-04-25 18:07:18.182988] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:20.297 [2024-04-25 18:07:18.183010] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:20.297 [2024-04-25 18:07:18.183019] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:20.297 [2024-04-25 18:07:18.183124] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:20.297 [2024-04-25 18:07:18.183129] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:20.297 [2024-04-25 18:07:18.183135] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:20.297 [2024-04-25 18:07:18.183978] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:20.297 [2024-04-25 18:07:18.184991] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:20.297 [2024-04-25 18:07:18.185980] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:12:20.297 [2024-04-25 18:07:18.187035] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:20.297 [2024-04-25 18:07:18.191285] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:20.297 [2024-04-25 18:07:18.191323] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:20.297 [2024-04-25 18:07:18.191329] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191350] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:20.297 [2024-04-25 18:07:18.191361] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191380] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:20.297 [2024-04-25 18:07:18.191386] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:20.297 [2024-04-25 18:07:18.191403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.297 [2024-04-25 18:07:18.191495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:20.297 [2024-04-25 18:07:18.191506] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:20.297 [2024-04-25 18:07:18.191515] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:20.297 [2024-04-25 18:07:18.191519] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:20.297 [2024-04-25 18:07:18.191523] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:20.297 [2024-04-25 18:07:18.191528] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:20.297 [2024-04-25 18:07:18.191533] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:20.297 [2024-04-25 18:07:18.191537] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191550] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:20.297 [2024-04-25 18:07:18.191594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:20.297 [2024-04-25 18:07:18.191605] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.297 [2024-04-25 18:07:18.191614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.297 [2024-04-25 18:07:18.191622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.297 [2024-04-25 18:07:18.191630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.297 [2024-04-25 18:07:18.191635] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191647] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:20.297 [2024-04-25 18:07:18.191666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:20.297 [2024-04-25 18:07:18.191672] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:20.297 [2024-04-25 18:07:18.191677] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191685] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191694] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:20.297 [2024-04-25 18:07:18.191718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:20.297 [2024-04-25 18:07:18.191763] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191773] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191781] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:20.297 [2024-04-25 18:07:18.191786] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:20.297 [2024-04-25 18:07:18.191792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:20.297 [2024-04-25 18:07:18.191807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:20.297 [2024-04-25 
18:07:18.191824] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:20.297 [2024-04-25 18:07:18.191836] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191846] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191854] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:20.297 [2024-04-25 18:07:18.191858] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:20.297 [2024-04-25 18:07:18.191865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.297 [2024-04-25 18:07:18.191889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:20.297 [2024-04-25 18:07:18.191904] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191914] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:20.297 [2024-04-25 18:07:18.191921] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:20.297 [2024-04-25 18:07:18.191925] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:20.297 [2024-04-25 18:07:18.191931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.191946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:20.298 [2024-04-25 18:07:18.191967] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:20.298 [2024-04-25 18:07:18.191974] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:20.298 [2024-04-25 18:07:18.191983] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:20.298 [2024-04-25 18:07:18.191989] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:20.298 [2024-04-25 18:07:18.191994] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:20.298 [2024-04-25 18:07:18.191999] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:20.298 [2024-04-25 18:07:18.192004] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:20.298 [2024-04-25 18:07:18.192009] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:20.298 [2024-04-25 18:07:18.192027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.192039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:20.298 [2024-04-25 18:07:18.192053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.192079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:20.298 [2024-04-25 18:07:18.192092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.192102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:20.298 [2024-04-25 18:07:18.192114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.192121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:20.298 [2024-04-25 18:07:18.192133] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:20.298 [2024-04-25 18:07:18.192138] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:20.298 [2024-04-25 18:07:18.192142] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:20.298 [2024-04-25 18:07:18.192145] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:20.298 [2024-04-25 18:07:18.192151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:20.298 [2024-04-25 18:07:18.192159] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:20.298 [2024-04-25 18:07:18.192163] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:20.298 [2024-04-25 18:07:18.192169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.192176] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:20.298 ===================================================== 00:12:20.298 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:20.298 ===================================================== 00:12:20.298 Controller Capabilities/Features 00:12:20.298 ================================ 00:12:20.298 Vendor ID: 4e58 00:12:20.298 Subsystem Vendor ID: 4e58 00:12:20.298 Serial Number: SPDK1 00:12:20.298 Model Number: SPDK bdev Controller 00:12:20.298 Firmware Version: 24.01.1 00:12:20.298 Recommended Arb Burst: 6 00:12:20.298 IEEE OUI Identifier: 8d 6b 50 00:12:20.298 Multi-path I/O 00:12:20.298 May have multiple subsystem ports: Yes 00:12:20.298 May have multiple 
controllers: Yes 00:12:20.298 Associated with SR-IOV VF: No 00:12:20.298 Max Data Transfer Size: 131072 00:12:20.298 Max Number of Namespaces: 32 00:12:20.298 Max Number of I/O Queues: 127 00:12:20.298 NVMe Specification Version (VS): 1.3 00:12:20.298 NVMe Specification Version (Identify): 1.3 00:12:20.298 Maximum Queue Entries: 256 00:12:20.298 Contiguous Queues Required: Yes 00:12:20.298 Arbitration Mechanisms Supported 00:12:20.298 Weighted Round Robin: Not Supported 00:12:20.298 Vendor Specific: Not Supported 00:12:20.298 Reset Timeout: 15000 ms 00:12:20.298 Doorbell Stride: 4 bytes 00:12:20.298 NVM Subsystem Reset: Not Supported 00:12:20.298 Command Sets Supported 00:12:20.298 NVM Command Set: Supported 00:12:20.298 Boot Partition: Not Supported 00:12:20.298 Memory Page Size Minimum: 4096 bytes 00:12:20.298 Memory Page Size Maximum: 4096 bytes 00:12:20.298 Persistent Memory Region: Not Supported 00:12:20.298 Optional Asynchronous Events Supported 00:12:20.298 Namespace Attribute Notices: Supported 00:12:20.298 Firmware Activation Notices: Not Supported 00:12:20.298 ANA Change Notices: Not Supported 00:12:20.298 PLE Aggregate Log Change Notices: Not Supported 00:12:20.298 LBA Status Info Alert Notices: Not Supported 00:12:20.298 EGE Aggregate Log Change Notices: Not Supported 00:12:20.298 Normal NVM Subsystem Shutdown event: Not Supported 00:12:20.298 Zone Descriptor Change Notices: Not Supported 00:12:20.298 Discovery Log Change Notices: Not Supported 00:12:20.298 Controller Attributes 00:12:20.298 128-bit Host Identifier: Supported 00:12:20.298 Non-Operational Permissive Mode: Not Supported 00:12:20.298 NVM Sets: Not Supported 00:12:20.298 Read Recovery Levels: Not Supported 00:12:20.298 Endurance Groups: Not Supported 00:12:20.298 Predictable Latency Mode: Not Supported 00:12:20.298 Traffic Based Keep ALive: Not Supported 00:12:20.298 Namespace Granularity: Not Supported 00:12:20.298 SQ Associations: Not Supported 00:12:20.298 UUID List: Not Supported 00:12:20.298 Multi-Domain Subsystem: Not Supported 00:12:20.298 Fixed Capacity Management: Not Supported 00:12:20.298 Variable Capacity Management: Not Supported 00:12:20.298 Delete Endurance Group: Not Supported 00:12:20.298 Delete NVM Set: Not Supported 00:12:20.298 Extended LBA Formats Supported: Not Supported 00:12:20.298 Flexible Data Placement Supported: Not Supported 00:12:20.298 00:12:20.298 Controller Memory Buffer Support 00:12:20.298 ================================ 00:12:20.298 Supported: No 00:12:20.298 00:12:20.298 Persistent Memory Region Support 00:12:20.298 ================================ 00:12:20.298 Supported: No 00:12:20.298 00:12:20.298 Admin Command Set Attributes 00:12:20.298 ============================ 00:12:20.298 Security Send/Receive: Not Supported 00:12:20.298 Format NVM: Not Supported 00:12:20.298 Firmware Activate/Download: Not Supported 00:12:20.298 Namespace Management: Not Supported 00:12:20.298 Device Self-Test: Not Supported 00:12:20.298 Directives: Not Supported 00:12:20.298 NVMe-MI: Not Supported 00:12:20.298 Virtualization Management: Not Supported 00:12:20.298 Doorbell Buffer Config: Not Supported 00:12:20.298 Get LBA Status Capability: Not Supported 00:12:20.298 Command & Feature Lockdown Capability: Not Supported 00:12:20.298 Abort Command Limit: 4 00:12:20.298 Async Event Request Limit: 4 00:12:20.298 Number of Firmware Slots: N/A 00:12:20.298 Firmware Slot 1 Read-Only: N/A 00:12:20.298 Firmware Activation Wit[2024-04-25 18:07:18.192180] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fb000 00:12:20.298 [2024-04-25 18:07:18.192187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.192194] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:20.298 [2024-04-25 18:07:18.192198] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:20.298 [2024-04-25 18:07:18.192204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:20.298 [2024-04-25 18:07:18.192211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.192227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.192238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.192246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:20.299 hout Reset: N/A 00:12:20.299 Multiple Update Detection Support: N/A 00:12:20.299 Firmware Update Granularity: No Information Provided 00:12:20.299 Per-Namespace SMART Log: No 00:12:20.299 Asymmetric Namespace Access Log Page: Not Supported 00:12:20.299 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:20.299 Command Effects Log Page: Supported 00:12:20.299 Get Log Page Extended Data: Supported 00:12:20.299 Telemetry Log Pages: Not Supported 00:12:20.299 Persistent Event Log Pages: Not Supported 00:12:20.299 Supported Log Pages Log Page: May Support 00:12:20.299 Commands Supported & Effects Log Page: Not Supported 00:12:20.299 Feature Identifiers & Effects Log Page:May Support 00:12:20.299 NVMe-MI Commands & Effects Log Page: May Support 00:12:20.299 Data Area 4 for Telemetry Log: Not Supported 00:12:20.299 Error Log Page Entries Supported: 128 00:12:20.299 Keep Alive: Supported 00:12:20.299 Keep Alive Granularity: 10000 ms 00:12:20.299 00:12:20.299 NVM Command Set Attributes 00:12:20.299 ========================== 00:12:20.299 Submission Queue Entry Size 00:12:20.299 Max: 64 00:12:20.299 Min: 64 00:12:20.299 Completion Queue Entry Size 00:12:20.299 Max: 16 00:12:20.299 Min: 16 00:12:20.299 Number of Namespaces: 32 00:12:20.299 Compare Command: Supported 00:12:20.299 Write Uncorrectable Command: Not Supported 00:12:20.299 Dataset Management Command: Supported 00:12:20.299 Write Zeroes Command: Supported 00:12:20.299 Set Features Save Field: Not Supported 00:12:20.299 Reservations: Not Supported 00:12:20.299 Timestamp: Not Supported 00:12:20.299 Copy: Supported 00:12:20.299 Volatile Write Cache: Present 00:12:20.299 Atomic Write Unit (Normal): 1 00:12:20.299 Atomic Write Unit (PFail): 1 00:12:20.299 Atomic Compare & Write Unit: 1 00:12:20.299 Fused Compare & Write: Supported 00:12:20.299 Scatter-Gather List 00:12:20.299 SGL Command Set: Supported (Dword aligned) 00:12:20.299 SGL Keyed: Not Supported 00:12:20.299 SGL Bit Bucket Descriptor: Not Supported 00:12:20.299 SGL Metadata Pointer: Not Supported 00:12:20.299 Oversized SGL: Not Supported 00:12:20.299 SGL Metadata Address: Not Supported 00:12:20.299 SGL Offset: Not Supported 00:12:20.299 Transport SGL 
Data Block: Not Supported 00:12:20.299 Replay Protected Memory Block: Not Supported 00:12:20.299 00:12:20.299 Firmware Slot Information 00:12:20.299 ========================= 00:12:20.299 Active slot: 1 00:12:20.299 Slot 1 Firmware Revision: 24.01.1 00:12:20.299 00:12:20.299 00:12:20.299 Commands Supported and Effects 00:12:20.299 ============================== 00:12:20.299 Admin Commands 00:12:20.299 -------------- 00:12:20.299 Get Log Page (02h): Supported 00:12:20.299 Identify (06h): Supported 00:12:20.299 Abort (08h): Supported 00:12:20.299 Set Features (09h): Supported 00:12:20.299 Get Features (0Ah): Supported 00:12:20.299 Asynchronous Event Request (0Ch): Supported 00:12:20.299 Keep Alive (18h): Supported 00:12:20.299 I/O Commands 00:12:20.299 ------------ 00:12:20.299 Flush (00h): Supported LBA-Change 00:12:20.299 Write (01h): Supported LBA-Change 00:12:20.299 Read (02h): Supported 00:12:20.299 Compare (05h): Supported 00:12:20.299 Write Zeroes (08h): Supported LBA-Change 00:12:20.299 Dataset Management (09h): Supported LBA-Change 00:12:20.299 Copy (19h): Supported LBA-Change 00:12:20.299 Unknown (79h): Supported LBA-Change 00:12:20.299 Unknown (7Ah): Supported 00:12:20.299 00:12:20.299 Error Log 00:12:20.299 ========= 00:12:20.299 00:12:20.299 Arbitration 00:12:20.299 =========== 00:12:20.299 Arbitration Burst: 1 00:12:20.299 00:12:20.299 Power Management 00:12:20.299 ================ 00:12:20.299 Number of Power States: 1 00:12:20.299 Current Power State: Power State #0 00:12:20.299 Power State #0: 00:12:20.299 Max Power: 0.00 W 00:12:20.299 Non-Operational State: Operational 00:12:20.299 Entry Latency: Not Reported 00:12:20.299 Exit Latency: Not Reported 00:12:20.299 Relative Read Throughput: 0 00:12:20.299 Relative Read Latency: 0 00:12:20.299 Relative Write Throughput: 0 00:12:20.299 Relative Write Latency: 0 00:12:20.299 Idle Power: Not Reported 00:12:20.299 Active Power: Not Reported 00:12:20.299 Non-Operational Permissive Mode: Not Supported 00:12:20.299 00:12:20.299 Health Information 00:12:20.299 ================== 00:12:20.299 Critical Warnings: 00:12:20.299 Available Spare Space: OK 00:12:20.299 Temperature: OK 00:12:20.299 Device Reliability: OK 00:12:20.299 Read Only: No 00:12:20.299 Volatile Memory Backup: OK 00:12:20.299 Current Temperature: 0 Kelvin[2024-04-25 18:07:18.192373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:20.299 [2024-04-25 18:07:18.192388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.192428] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:20.299 [2024-04-25 18:07:18.192444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.192451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.192457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.192463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.299 [2024-04-25 18:07:18.193048] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:20.299 [2024-04-25 18:07:18.193077] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:20.299 [2024-04-25 18:07:18.194090] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:20.299 [2024-04-25 18:07:18.194107] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:20.299 [2024-04-25 18:07:18.195057] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:20.299 [2024-04-25 18:07:18.195098] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:20.299 [2024-04-25 18:07:18.195189] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:20.299 [2024-04-25 18:07:18.197124] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:20.558 (-273 Celsius) 00:12:20.558 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:20.558 Available Spare: 0% 00:12:20.558 Available Spare Threshold: 0% 00:12:20.558 Life Percentage Used: 0% 00:12:20.558 Data Units Read: 0 00:12:20.558 Data Units Written: 0 00:12:20.558 Host Read Commands: 0 00:12:20.558 Host Write Commands: 0 00:12:20.558 Controller Busy Time: 0 minutes 00:12:20.558 Power Cycles: 0 00:12:20.558 Power On Hours: 0 hours 00:12:20.558 Unsafe Shutdowns: 0 00:12:20.558 Unrecoverable Media Errors: 0 00:12:20.558 Lifetime Error Log Entries: 0 00:12:20.558 Warning Temperature Time: 0 minutes 00:12:20.558 Critical Temperature Time: 0 minutes 00:12:20.558 00:12:20.558 Number of Queues 00:12:20.558 ================ 00:12:20.558 Number of I/O Submission Queues: 127 00:12:20.558 Number of I/O Completion Queues: 127 00:12:20.558 00:12:20.558 Active Namespaces 00:12:20.558 ================= 00:12:20.558 Namespace ID:1 00:12:20.558 Error Recovery Timeout: Unlimited 00:12:20.558 Command Set Identifier: NVM (00h) 00:12:20.558 Deallocate: Supported 00:12:20.558 Deallocated/Unwritten Error: Not Supported 00:12:20.558 Deallocated Read Value: Unknown 00:12:20.558 Deallocate in Write Zeroes: Not Supported 00:12:20.558 Deallocated Guard Field: 0xFFFF 00:12:20.558 Flush: Supported 00:12:20.558 Reservation: Supported 00:12:20.558 Namespace Sharing Capabilities: Multiple Controllers 00:12:20.558 Size (in LBAs): 131072 (0GiB) 00:12:20.558 Capacity (in LBAs): 131072 (0GiB) 00:12:20.558 Utilization (in LBAs): 131072 (0GiB) 00:12:20.558 NGUID: A112291EFD2B49CC99F9437755D235CC 00:12:20.558 UUID: a112291e-fd2b-49cc-99f9-437755d235cc 00:12:20.558 Thin Provisioning: Not Supported 00:12:20.558 Per-NS Atomic Units: Yes 00:12:20.558 Atomic Boundary Size (Normal): 0 00:12:20.558 Atomic Boundary Size (PFail): 0 00:12:20.558 Atomic Boundary Offset: 0 00:12:20.558 Maximum Single Source Range Length: 65535 00:12:20.558 Maximum Copy Length: 65535 00:12:20.558 Maximum Source Range Count: 1 00:12:20.558 NGUID/EUI64 Never Reused: No 00:12:20.558 Namespace Write Protected: No 00:12:20.558 Number of LBA Formats: 1 00:12:20.558 Current LBA Format: LBA Format #00 00:12:20.558 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:20.558 00:12:20.558 
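The controller dump above is produced by the identify step of this vfio-user test; as a rough sketch (assuming the same repo layout and vfio-user1 socket path used throughout this run), the equivalent standalone invocation against the first controller would be:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci

The -L options enable the nvme, nvme_vfio and vfio_pci log flags that generate the *DEBUG* lines interleaved with the dump; -g appears to map to DPDK's --single-file-segments, as suggested by the EAL parameter line later in this log.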
18:07:18 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:25.870 Initializing NVMe Controllers 00:12:25.870 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:25.870 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:25.870 Initialization complete. Launching workers. 00:12:25.870 ======================================================== 00:12:25.870 Latency(us) 00:12:25.870 Device Information : IOPS MiB/s Average min max 00:12:25.870 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36488.10 142.53 3507.55 1064.79 10346.31 00:12:25.870 ======================================================== 00:12:25.870 Total : 36488.10 142.53 3507.55 1064.79 10346.31 00:12:25.870 00:12:25.870 18:07:23 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:31.132 Initializing NVMe Controllers 00:12:31.132 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:31.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:31.132 Initialization complete. Launching workers. 00:12:31.132 ======================================================== 00:12:31.132 Latency(us) 00:12:31.132 Device Information : IOPS MiB/s Average min max 00:12:31.132 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15954.33 62.32 8028.03 6491.18 15004.62 00:12:31.132 ======================================================== 00:12:31.132 Total : 15954.33 62.32 8028.03 6491.18 15004.62 00:12:31.132 00:12:31.132 18:07:28 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:36.406 Initializing NVMe Controllers 00:12:36.406 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.406 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:36.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:36.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:36.406 Initialization complete. Launching workers. 
00:12:36.406 Starting thread on core 2 00:12:36.406 Starting thread on core 3 00:12:36.406 Starting thread on core 1 00:12:36.406 18:07:34 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:40.605 Initializing NVMe Controllers 00:12:40.606 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.606 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.606 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:40.606 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:40.606 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:40.606 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:40.606 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:40.606 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:40.606 Initialization complete. Launching workers. 00:12:40.606 Starting thread on core 1 with urgent priority queue 00:12:40.606 Starting thread on core 2 with urgent priority queue 00:12:40.606 Starting thread on core 3 with urgent priority queue 00:12:40.606 Starting thread on core 0 with urgent priority queue 00:12:40.606 SPDK bdev Controller (SPDK1 ) core 0: 4257.67 IO/s 23.49 secs/100000 ios 00:12:40.606 SPDK bdev Controller (SPDK1 ) core 1: 4194.00 IO/s 23.84 secs/100000 ios 00:12:40.606 SPDK bdev Controller (SPDK1 ) core 2: 3016.00 IO/s 33.16 secs/100000 ios 00:12:40.606 SPDK bdev Controller (SPDK1 ) core 3: 3146.67 IO/s 31.78 secs/100000 ios 00:12:40.606 ======================================================== 00:12:40.606 00:12:40.606 18:07:37 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:40.606 Initializing NVMe Controllers 00:12:40.606 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.606 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.606 Namespace ID: 1 size: 0GB 00:12:40.606 Initialization complete. 00:12:40.606 INFO: using host memory buffer for IO 00:12:40.606 Hello world! 00:12:40.606 18:07:38 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:41.541 Initializing NVMe Controllers 00:12:41.541 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.541 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.541 Initialization complete. Launching workers. 
00:12:41.541 submit (in ns) avg, min, max = 6097.2, 3539.1, 6002320.9 00:12:41.541 complete (in ns) avg, min, max = 32546.4, 1986.4, 6103863.6 00:12:41.541 00:12:41.541 Submit histogram 00:12:41.541 ================ 00:12:41.541 Range in us Cumulative Count 00:12:41.541 3.535 - 3.549: 0.0083% ( 1) 00:12:41.541 3.607 - 3.622: 0.0333% ( 3) 00:12:41.541 3.622 - 3.636: 0.1000% ( 8) 00:12:41.541 3.636 - 3.651: 0.2249% ( 15) 00:12:41.541 3.651 - 3.665: 0.3999% ( 21) 00:12:41.541 3.665 - 3.680: 0.5498% ( 18) 00:12:41.541 3.680 - 3.695: 0.7248% ( 21) 00:12:41.541 3.695 - 3.709: 1.0497% ( 39) 00:12:41.541 3.709 - 3.724: 1.5578% ( 61) 00:12:41.541 3.724 - 3.753: 2.9990% ( 173) 00:12:41.541 3.753 - 3.782: 5.0900% ( 251) 00:12:41.542 3.782 - 3.811: 9.0387% ( 474) 00:12:41.542 3.811 - 3.840: 15.8031% ( 812) 00:12:41.542 3.840 - 3.869: 22.0177% ( 746) 00:12:41.542 3.869 - 3.898: 30.1066% ( 971) 00:12:41.542 3.898 - 3.927: 40.7614% ( 1279) 00:12:41.542 3.927 - 3.956: 48.4922% ( 928) 00:12:41.542 3.956 - 3.985: 55.4648% ( 837) 00:12:41.542 3.985 - 4.015: 62.7374% ( 873) 00:12:41.542 4.015 - 4.044: 68.0357% ( 636) 00:12:41.542 4.044 - 4.073: 70.8847% ( 342) 00:12:41.542 4.073 - 4.102: 73.5088% ( 315) 00:12:41.542 4.102 - 4.131: 75.4499% ( 233) 00:12:41.542 4.131 - 4.160: 77.4242% ( 237) 00:12:41.542 4.160 - 4.189: 79.3152% ( 227) 00:12:41.542 4.189 - 4.218: 82.3309% ( 362) 00:12:41.542 4.218 - 4.247: 84.8800% ( 306) 00:12:41.542 4.247 - 4.276: 87.1709% ( 275) 00:12:41.542 4.276 - 4.305: 89.5285% ( 283) 00:12:41.542 4.305 - 4.335: 91.6361% ( 253) 00:12:41.542 4.335 - 4.364: 92.9690% ( 160) 00:12:41.542 4.364 - 4.393: 93.9853% ( 122) 00:12:41.542 4.393 - 4.422: 94.7517% ( 92) 00:12:41.542 4.422 - 4.451: 95.1599% ( 49) 00:12:41.542 4.451 - 4.480: 95.4682% ( 37) 00:12:41.542 4.480 - 4.509: 95.7847% ( 38) 00:12:41.542 4.509 - 4.538: 95.9430% ( 19) 00:12:41.542 4.538 - 4.567: 96.0513% ( 13) 00:12:41.542 4.567 - 4.596: 96.2596% ( 25) 00:12:41.542 4.596 - 4.625: 96.4762% ( 26) 00:12:41.542 4.625 - 4.655: 96.7511% ( 33) 00:12:41.542 4.655 - 4.684: 96.9010% ( 18) 00:12:41.542 4.684 - 4.713: 97.0427% ( 17) 00:12:41.542 4.713 - 4.742: 97.1676% ( 15) 00:12:41.542 4.742 - 4.771: 97.2592% ( 11) 00:12:41.542 4.771 - 4.800: 97.3009% ( 5) 00:12:41.542 4.800 - 4.829: 97.3592% ( 7) 00:12:41.542 4.829 - 4.858: 97.4009% ( 5) 00:12:41.542 4.858 - 4.887: 97.5175% ( 14) 00:12:41.542 4.887 - 4.916: 97.5758% ( 7) 00:12:41.542 4.916 - 4.945: 97.6508% ( 9) 00:12:41.542 4.945 - 4.975: 97.6841% ( 4) 00:12:41.542 4.975 - 5.004: 97.7091% ( 3) 00:12:41.542 5.004 - 5.033: 97.7341% ( 3) 00:12:41.542 5.033 - 5.062: 97.7424% ( 1) 00:12:41.542 5.062 - 5.091: 97.7591% ( 2) 00:12:41.542 5.091 - 5.120: 97.7924% ( 4) 00:12:41.542 5.120 - 5.149: 97.8257% ( 4) 00:12:41.542 5.149 - 5.178: 97.8590% ( 4) 00:12:41.542 5.178 - 5.207: 97.8674% ( 1) 00:12:41.542 5.236 - 5.265: 97.9090% ( 5) 00:12:41.542 5.265 - 5.295: 97.9757% ( 8) 00:12:41.542 5.295 - 5.324: 97.9923% ( 2) 00:12:41.542 5.324 - 5.353: 98.0257% ( 4) 00:12:41.542 5.353 - 5.382: 98.0340% ( 1) 00:12:41.542 5.382 - 5.411: 98.0423% ( 1) 00:12:41.542 5.411 - 5.440: 98.0506% ( 1) 00:12:41.542 5.469 - 5.498: 98.0590% ( 1) 00:12:41.542 5.527 - 5.556: 98.0673% ( 1) 00:12:41.542 5.556 - 5.585: 98.0756% ( 1) 00:12:41.542 5.585 - 5.615: 98.0923% ( 2) 00:12:41.542 5.673 - 5.702: 98.1090% ( 2) 00:12:41.542 6.051 - 6.080: 98.1173% ( 1) 00:12:41.542 6.109 - 6.138: 98.1256% ( 1) 00:12:41.542 6.138 - 6.167: 98.1340% ( 1) 00:12:41.542 6.371 - 6.400: 98.1423% ( 1) 00:12:41.542 6.836 - 6.865: 98.1506% ( 1) 
00:12:41.542 7.156 - 7.185: 98.1589% ( 1) 00:12:41.542 8.378 - 8.436: 98.1756% ( 2) 00:12:41.542 8.553 - 8.611: 98.1923% ( 2) 00:12:41.542 8.611 - 8.669: 98.2006% ( 1) 00:12:41.542 8.727 - 8.785: 98.2089% ( 1) 00:12:41.542 8.785 - 8.844: 98.2339% ( 3) 00:12:41.542 8.844 - 8.902: 98.2423% ( 1) 00:12:41.542 8.902 - 8.960: 98.2506% ( 1) 00:12:41.542 9.018 - 9.076: 98.2672% ( 2) 00:12:41.542 9.076 - 9.135: 98.2756% ( 1) 00:12:41.542 9.367 - 9.425: 98.2839% ( 1) 00:12:41.542 9.425 - 9.484: 98.2922% ( 1) 00:12:41.542 9.484 - 9.542: 98.3422% ( 6) 00:12:41.542 9.716 - 9.775: 98.3505% ( 1) 00:12:41.542 9.833 - 9.891: 98.3589% ( 1) 00:12:41.542 10.298 - 10.356: 98.3755% ( 2) 00:12:41.542 10.473 - 10.531: 98.3922% ( 2) 00:12:41.542 11.113 - 11.171: 98.4005% ( 1) 00:12:41.542 11.345 - 11.404: 98.4089% ( 1) 00:12:41.542 12.451 - 12.509: 98.4172% ( 1) 00:12:41.542 12.509 - 12.567: 98.4255% ( 1) 00:12:41.542 12.567 - 12.625: 98.4339% ( 1) 00:12:41.542 12.684 - 12.742: 98.4422% ( 1) 00:12:41.542 12.975 - 13.033: 98.4505% ( 1) 00:12:41.542 13.265 - 13.324: 98.4588% ( 1) 00:12:41.542 13.324 - 13.382: 98.4755% ( 2) 00:12:41.542 13.498 - 13.556: 98.4922% ( 2) 00:12:41.542 13.615 - 13.673: 98.5005% ( 1) 00:12:41.542 13.789 - 13.847: 98.5088% ( 1) 00:12:41.542 13.847 - 13.905: 98.5172% ( 1) 00:12:41.542 14.022 - 14.080: 98.5338% ( 2) 00:12:41.542 14.196 - 14.255: 98.5422% ( 1) 00:12:41.542 14.255 - 14.313: 98.5588% ( 2) 00:12:41.542 14.313 - 14.371: 98.5838% ( 3) 00:12:41.542 14.371 - 14.429: 98.5921% ( 1) 00:12:41.542 14.429 - 14.487: 98.6005% ( 1) 00:12:41.542 14.720 - 14.778: 98.6171% ( 2) 00:12:41.542 14.778 - 14.836: 98.6338% ( 2) 00:12:41.542 14.836 - 14.895: 98.6504% ( 2) 00:12:41.542 14.895 - 15.011: 98.6588% ( 1) 00:12:41.542 15.011 - 15.127: 98.6754% ( 2) 00:12:41.542 15.127 - 15.244: 98.7088% ( 4) 00:12:41.542 15.244 - 15.360: 98.7921% ( 10) 00:12:41.542 15.360 - 15.476: 98.8087% ( 2) 00:12:41.542 15.476 - 15.593: 98.8171% ( 1) 00:12:41.542 15.593 - 15.709: 98.8254% ( 1) 00:12:41.542 15.709 - 15.825: 98.8421% ( 2) 00:12:41.542 15.825 - 15.942: 98.8837% ( 5) 00:12:41.542 15.942 - 16.058: 98.8920% ( 1) 00:12:41.542 16.058 - 16.175: 98.9254% ( 4) 00:12:41.542 16.175 - 16.291: 98.9503% ( 3) 00:12:41.542 16.524 - 16.640: 98.9587% ( 1) 00:12:41.542 16.640 - 16.756: 98.9753% ( 2) 00:12:41.542 16.756 - 16.873: 98.9920% ( 2) 00:12:41.542 16.873 - 16.989: 99.0003% ( 1) 00:12:41.542 16.989 - 17.105: 99.0170% ( 2) 00:12:41.542 17.222 - 17.338: 99.0253% ( 1) 00:12:41.542 17.804 - 17.920: 99.0337% ( 1) 00:12:41.542 18.036 - 18.153: 99.0420% ( 1) 00:12:41.542 18.153 - 18.269: 99.0753% ( 4) 00:12:41.542 18.269 - 18.385: 99.0836% ( 1) 00:12:41.542 18.385 - 18.502: 99.1086% ( 3) 00:12:41.542 18.502 - 18.618: 99.1336% ( 3) 00:12:41.542 18.618 - 18.735: 99.1836% ( 6) 00:12:41.542 18.735 - 18.851: 99.2253% ( 5) 00:12:41.542 18.851 - 18.967: 99.2336% ( 1) 00:12:41.542 18.967 - 19.084: 99.3002% ( 8) 00:12:41.542 19.084 - 19.200: 99.3252% ( 3) 00:12:41.542 19.200 - 19.316: 99.3835% ( 7) 00:12:41.542 19.316 - 19.433: 99.4252% ( 5) 00:12:41.542 19.433 - 19.549: 99.4752% ( 6) 00:12:41.542 19.549 - 19.665: 99.5168% ( 5) 00:12:41.542 19.665 - 19.782: 99.5751% ( 7) 00:12:41.542 19.782 - 19.898: 99.6251% ( 6) 00:12:41.542 19.898 - 20.015: 99.7001% ( 9) 00:12:41.542 20.015 - 20.131: 99.7501% ( 6) 00:12:41.542 20.131 - 20.247: 99.7751% ( 3) 00:12:41.542 20.364 - 20.480: 99.8084% ( 4) 00:12:41.542 20.596 - 20.713: 99.8167% ( 1) 00:12:41.542 20.713 - 20.829: 99.8417% ( 3) 00:12:41.542 20.829 - 20.945: 99.8584% ( 2) 00:12:41.542 20.945 
- 21.062: 99.8667% ( 1) 00:12:41.542 21.178 - 21.295: 99.8834% ( 2) 00:12:41.542 21.295 - 21.411: 99.8917% ( 1) 00:12:41.542 21.993 - 22.109: 99.9084% ( 2) 00:12:41.542 25.600 - 25.716: 99.9167% ( 1) 00:12:41.542 26.298 - 26.415: 99.9250% ( 1) 00:12:41.542 29.207 - 29.324: 99.9334% ( 1) 00:12:41.542 43.287 - 43.520: 99.9417% ( 1) 00:12:41.542 46.545 - 46.778: 99.9500% ( 1) 00:12:41.542 54.225 - 54.458: 99.9583% ( 1) 00:12:41.542 3098.065 - 3112.960: 99.9667% ( 1) 00:12:41.542 3932.160 - 3961.949: 99.9750% ( 1) 00:12:41.542 4021.527 - 4051.316: 99.9833% ( 1) 00:12:41.542 5093.935 - 5123.724: 99.9917% ( 1) 00:12:41.542 5987.607 - 6017.396: 100.0000% ( 1) 00:12:41.542 00:12:41.542 Complete histogram 00:12:41.542 ================== 00:12:41.542 Range in us Cumulative Count 00:12:41.542 1.978 - 1.993: 0.0750% ( 9) 00:12:41.542 1.993 - 2.007: 0.5165% ( 53) 00:12:41.542 2.007 - 2.022: 0.8580% ( 41) 00:12:41.542 2.022 - 2.036: 1.0830% ( 27) 00:12:41.542 2.036 - 2.051: 2.2493% ( 140) 00:12:41.542 2.051 - 2.065: 4.1070% ( 223) 00:12:41.542 2.065 - 2.080: 6.4312% ( 279) 00:12:41.542 2.080 - 2.095: 7.5225% ( 131) 00:12:41.542 2.095 - 2.109: 13.0623% ( 665) 00:12:41.542 2.109 - 2.124: 22.9840% ( 1191) 00:12:41.542 2.124 - 2.138: 31.1146% ( 976) 00:12:41.542 2.138 - 2.153: 35.2716% ( 499) 00:12:41.542 2.153 - 2.167: 41.7777% ( 781) 00:12:41.542 2.167 - 2.182: 52.2659% ( 1259) 00:12:41.542 2.182 - 2.196: 62.2293% ( 1196) 00:12:41.542 2.196 - 2.211: 69.1519% ( 831) 00:12:41.542 2.211 - 2.225: 70.8764% ( 207) 00:12:41.542 2.225 - 2.240: 74.1503% ( 393) 00:12:41.542 2.240 - 2.255: 77.4658% ( 398) 00:12:41.542 2.255 - 2.269: 80.3066% ( 341) 00:12:41.542 2.269 - 2.284: 81.7894% ( 178) 00:12:41.542 2.284 - 2.298: 82.5142% ( 87) 00:12:41.542 2.298 - 2.313: 83.7721% ( 151) 00:12:41.542 2.313 - 2.327: 85.2133% ( 173) 00:12:41.542 2.327 - 2.342: 86.2712% ( 127) 00:12:41.543 2.342 - 2.356: 86.7461% ( 57) 00:12:41.543 2.356 - 2.371: 87.4125% ( 80) 00:12:41.543 2.371 - 2.385: 89.8284% ( 290) 00:12:41.543 2.385 - 2.400: 91.9527% ( 255) 00:12:41.543 2.400 - 2.415: 93.5355% ( 190) 00:12:41.543 2.415 - 2.429: 94.0103% ( 57) 00:12:41.543 2.429 - 2.444: 94.3352% ( 39) 00:12:41.543 2.444 - 2.458: 94.6185% ( 34) 00:12:41.543 2.458 - 2.473: 95.0933% ( 57) 00:12:41.543 2.473 - 2.487: 95.4432% ( 42) 00:12:41.543 2.487 - 2.502: 95.6764% ( 28) 00:12:41.543 2.502 - 2.516: 95.8680% ( 23) 00:12:41.543 2.516 - 2.531: 95.9847% ( 14) 00:12:41.543 2.531 - 2.545: 96.1346% ( 18) 00:12:41.543 2.545 - 2.560: 96.2679% ( 16) 00:12:41.543 2.560 - 2.575: 96.3179% ( 6) 00:12:41.543 2.575 - 2.589: 96.3679% ( 6) 00:12:41.543 2.589 - 2.604: 96.4429% ( 9) 00:12:41.543 2.604 - 2.618: 96.5095% ( 8) 00:12:41.543 2.618 - 2.633: 96.5511% ( 5) 00:12:41.543 2.633 - 2.647: 96.5845% ( 4) 00:12:41.543 2.647 - 2.662: 96.6594% ( 9) 00:12:41.543 2.662 - 2.676: 96.6928% ( 4) 00:12:41.543 2.676 - 2.691: 96.7094% ( 2) 00:12:41.543 2.691 - 2.705: 96.7261% ( 2) 00:12:41.543 2.705 - 2.720: 96.7344% ( 1) 00:12:41.543 2.720 - 2.735: 96.7428% ( 1) 00:12:41.543 2.735 - 2.749: 96.7844% ( 5) 00:12:41.543 2.764 - 2.778: 96.8011% ( 2) 00:12:41.543 2.807 - 2.822: 96.8261% ( 3) 00:12:41.543 2.851 - 2.865: 96.8344% ( 1) 00:12:41.543 3.447 - 3.462: 96.8427% ( 1) 00:12:41.543 3.505 - 3.520: 96.8510% ( 1) 00:12:41.543 3.564 - 3.578: 96.8594% ( 1) 00:12:41.543 3.593 - 3.607: 96.8677% ( 1) 00:12:41.543 3.607 - 3.622: 96.8760% ( 1) 00:12:41.543 3.636 - 3.651: 96.8844% ( 1) 00:12:41.543 3.651 - 3.665: 96.9010% ( 2) 00:12:41.543 3.665 - 3.680: 96.9094% ( 1) 00:12:41.543 3.709 - 3.724: 
96.9177% ( 1) 00:12:41.543 3.724 - 3.753: 96.9260% ( 1) 00:12:41.543 3.753 - 3.782: 96.9427% ( 2) 00:12:41.543 3.782 - 3.811: 96.9760% ( 4) 00:12:41.543 3.811 - 3.840: 96.9843% ( 1) 00:12:41.543 3.840 - 3.869: 97.0010% ( 2) 00:12:41.543 3.869 - 3.898: 97.0177% ( 2) 00:12:41.543 4.044 - 4.073: 97.0260% ( 1) 00:12:41.543 4.160 - 4.189: 97.0427% ( 2) 00:12:41.543 4.247 - 4.276: 97.0510% ( 1) 00:12:41.543 4.364 - 4.393: 97.0593% ( 1) 00:12:41.543 4.422 - 4.451: 97.0676% ( 1) 00:12:41.543 4.509 - 4.538: 97.0760% ( 1) 00:12:41.543 4.538 - 4.567: 97.0843% ( 1) 00:12:41.543 4.596 - 4.625: 97.0926% ( 1) 00:12:41.543 4.625 - 4.655: 97.1010% ( 1) 00:12:41.543 5.236 - 5.265: 97.1093% ( 1) 00:12:41.543 5.964 - 5.993: 97.1176% ( 1) 00:12:41.543 6.575 - 6.604: 97.1260% ( 1) 00:12:41.543 6.662 - 6.691: 97.1343% ( 1) 00:12:41.543 6.778 - 6.807: 97.1426% ( 1) 00:12:41.543 6.953 - 6.982: 97.1509% ( 1) 00:12:41.543 6.982 - 7.011: 97.1593% ( 1) 00:12:41.543 7.098 - 7.127: 97.1676% ( 1) 00:12:41.543 7.127 - 7.156: 97.1843% ( 2) 00:12:41.543 7.156 - 7.185: 97.1926% ( 1) 00:12:41.543 7.185 - 7.215: 97.2009% ( 1) 00:12:41.543 7.244 - 7.273: 97.2176% ( 2) 00:12:41.543 7.273 - 7.302: 97.2259% ( 1) 00:12:41.543 7.331 - 7.360: 97.2343% ( 1) 00:12:41.543 7.389 - 7.418: 97.2426% ( 1) 00:12:41.543 7.447 - 7.505: 97.2509% ( 1) 00:12:41.543 7.505 - 7.564: 97.2676% ( 2) 00:12:41.543 7.622 - 7.680: 97.2759% ( 1) 00:12:41.543 7.680 - 7.738: 97.2842% ( 1) 00:12:41.543 7.738 - 7.796: 97.3009% ( 2) 00:12:41.543 7.796 - 7.855: 97.3092% ( 1) 00:12:41.543 7.971 - 8.029: 97.3176% ( 1) 00:12:41.543 8.029 - 8.087: 97.3259% ( 1) 00:12:41.543 8.378 - 8.436: 97.3342% ( 1) 00:12:41.543 8.495 - 8.553: 97.3509% ( 2) 00:12:41.543 8.553 - 8.611: 97.3592% ( 1) 00:12:41.543 8.844 - 8.902: 97.3675% ( 1) 00:12:41.543 9.135 - 9.193: 97.3759% ( 1) 00:12:41.543 9.309 - 9.367: 97.3842% ( 1) 00:12:41.543 9.425 - 9.484: 97.3925% ( 1) 00:12:41.543 9.484 - 9.542: 97.4009% ( 1) 00:12:41.543 9.542 - 9.600: 97.4175% ( 2) 00:12:41.543 9.600 - 9.658: 97.4259% ( 1) 00:12:41.543 9.833 - 9.891: 97.4342% ( 1) 00:12:41.543 10.065 - 10.124: 97.4425% ( 1) 00:12:41.543 10.124 - 10.182: 97.4508% ( 1) 00:12:41.543 11.636 - 11.695: 97.4592% ( 1) 00:12:41.543 11.927 - 11.985: 97.4675% ( 1) 00:12:41.543 12.044 - 12.102: 97.4758% ( 1) 00:12:41.543 12.218 - 12.276: 97.4842% ( 1) 00:12:41.543 12.335 - 12.393: 97.4925% ( 1) 00:12:41.543 12.451 - 12.509: 97.5008% ( 1) 00:12:41.543 12.684 - 12.742: 97.5175% ( 2) 00:12:41.543 13.265 - 13.324: 97.5342% ( 2) 00:12:41.543 13.324 - 13.382: 97.5425% ( 1) 00:12:41.543 13.498 - 13.556: 97.5508% ( 1) 00:12:41.543 13.556 - 13.615: 97.5591% ( 1) 00:12:41.543 13.615 - 13.673: 97.5675% ( 1) 00:12:41.543 13.847 - 13.905: 97.5758% ( 1) 00:12:41.543 13.964 - 14.022: 97.5841% ( 1) 00:12:41.543 14.196 - 14.255: 97.5925% ( 1) 00:12:41.543 16.407 - 16.524: 97.6341% ( 5) 00:12:41.543 16.524 - 16.640: 97.6674% ( 4) 00:12:41.543 16.640 - 16.756: 97.7341% ( 8) 00:12:41.543 16.756 - 16.873: 97.8257% ( 11) 00:12:41.543 16.873 - 16.989: 97.9174% ( 11) 00:12:41.543 16.989 - 17.105: 97.9757% ( 7) 00:12:41.543 17.105 - 17.222: 98.0007% ( 3) 00:12:41.543 17.222 - 17.338: 98.0590% ( 7) 00:12:41.543 17.338 - 17.455: 98.1506% ( 11) 00:12:41.543 17.455 - 17.571: 98.2173% ( 8) 00:12:41.543 17.571 - 17.687: 98.2839% ( 8) 00:12:41.543 17.687 - 17.804: 98.3755% ( 11) 00:12:41.543 17.804 - 17.920: 98.4922% ( 14) 00:12:41.543 17.920 - 18.036: 98.5338% ( 5) 00:12:41.543 18.036 - 18.153: 98.6171% ( 10) 00:12:41.543 18.153 - 18.269: 98.6921% ( 9) 00:12:41.543 18.269 - 
18.385: 98.7338% ( 5) 00:12:41.543 18.385 - 18.502: 98.7421% ( 1) 00:12:41.543 18.502 - 18.618: 98.7504% ( 1) 00:12:41.543 18.618 - 18.735: 98.7671% ( 2) 00:12:41.543 18.735 - 18.851: 98.7921% ( 3) 00:12:41.543 18.851 - 18.967: 98.8254% ( 4) 00:12:41.543 18.967 - 19.084: 98.8587% ( 4) 00:12:41.543 19.084 - 19.200: 98.9170% ( 7) 00:12:41.543 19.200 - 19.316: 98.9587% ( 5) 00:12:41.543 19.316 - 19.433: 98.9670% ( 1) 00:12:41.543 19.433 - 19.549: 98.9837% ( 2) 00:12:41.543 19.549 - 19.665: 99.0003% ( 2) 00:12:41.543 19.782 - 19.898: 99.0087% ( 1) 00:12:41.543 20.131 - 20.247: 99.0170% ( 1) 00:12:41.543 22.225 - 22.342: 99.0253% ( 1) 00:12:41.543 22.342 - 22.458: 99.0420% ( 2) 00:12:41.543 22.458 - 22.575: 99.0503% ( 1) 00:12:41.543 23.156 - 23.273: 99.0670% ( 2) 00:12:41.543 23.389 - 23.505: 99.0753% ( 1) 00:12:41.543 25.716 - 25.833: 99.0836% ( 1) 00:12:41.543 28.276 - 28.393: 99.0920% ( 1) 00:12:41.543 28.393 - 28.509: 99.1003% ( 1) 00:12:41.543 38.633 - 38.865: 99.1086% ( 1) 00:12:41.543 39.796 - 40.029: 99.1170% ( 1) 00:12:41.543 945.804 - 949.527: 99.1253% ( 1) 00:12:41.543 953.251 - 960.698: 99.1336% ( 1) 00:12:41.543 968.145 - 975.593: 99.1420% ( 1) 00:12:41.543 975.593 - 983.040: 99.1503% ( 1) 00:12:41.543 983.040 - 990.487: 99.1586% ( 1) 00:12:41.543 1012.829 - 1020.276: 99.1919% ( 4) 00:12:41.543 1027.724 - 1035.171: 99.2003% ( 1) 00:12:41.543 1042.618 - 1050.065: 99.2169% ( 2) 00:12:41.543 1050.065 - 1057.513: 99.2253% ( 1) 00:12:41.543 1057.513 - 1064.960: 99.2336% ( 1) 00:12:41.543 1109.644 - 1117.091: 99.2419% ( 1) 00:12:41.543 1966.080 - 1980.975: 99.2502% ( 1) 00:12:41.543 1980.975 - 1995.869: 99.2586% ( 1) 00:12:41.543 1995.869 - 2010.764: 99.2669% ( 1) 00:12:41.543 2010.764 - 2025.658: 99.2836% ( 2) 00:12:41.543 2040.553 - 2055.447: 99.2919% ( 1) 00:12:41.543 2070.342 - 2085.236: 99.3002% ( 1) 00:12:41.543 2085.236 - 2100.131: 99.3086% ( 1) 00:12:41.543 2889.542 - 2904.436: 99.3169% ( 1) 00:12:41.543 2934.225 - 2949.120: 99.3252% ( 1) 00:12:41.543 2978.909 - 2993.804: 99.3336% ( 1) 00:12:41.543 2993.804 - 3008.698: 99.3502% ( 2) 00:12:41.543 3008.698 - 3023.593: 99.3585% ( 1) 00:12:41.543 3023.593 - 3038.487: 99.4085% ( 6) 00:12:41.543 3038.487 - 3053.382: 99.4419% ( 4) 00:12:41.543 3053.382 - 3068.276: 99.4668% ( 3) 00:12:41.543 3098.065 - 3112.960: 99.4752% ( 1) 00:12:41.543 3172.538 - 3187.433: 99.4918% ( 2) 00:12:41.543 3872.582 - 3902.371: 99.5085% ( 2) 00:12:41.543 3902.371 - 3932.160: 99.5168% ( 1) 00:12:41.543 3932.160 - 3961.949: 99.5335% ( 2) 00:12:41.543 3961.949 - 3991.738: 99.5918% ( 7) 00:12:41.543 3991.738 - 4021.527: 99.7334% ( 17) 00:12:41.543 4021.527 - 4051.316: 99.8334% ( 12) 00:12:41.543 4051.316 - 4081.105: 99.8834% ( 6) 00:12:41.543 4110.895 - 4140.684: 99.8917% ( 1) 00:12:41.543 4140.684 - 4170.473: 99.9000% ( 1) 00:12:41.543 4944.989 - 4974.778: 99.9084% ( 1) 00:12:41.543 4974.778 - 5004.567: 99.9167% ( 1) 00:12:41.543 5004.567 - 5034.356: 99.9334% ( 2) 00:12:41.543 5034.356 - 5064.145: 99.9500% ( 2) 00:12:41.543 5987.607 - 6017.396: 99.9833% ( 4) 00:12:41.543 6017.396 - 6047.185: 99.9917% ( 1) 00:12:41.543 6076.975 - 6106.764: 100.0000% ( 1) 00:12:41.543 00:12:41.544 18:07:39 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:41.544 18:07:39 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:41.544 18:07:39 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:41.544 18:07:39 -- target/nvmf_vfio_user.sh@24 
-- # local malloc_num=Malloc3 00:12:41.544 18:07:39 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:41.802 [2024-04-25 18:07:39.587046] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:41.802 [ 00:12:41.802 { 00:12:41.802 "allow_any_host": true, 00:12:41.802 "hosts": [], 00:12:41.802 "listen_addresses": [], 00:12:41.802 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:41.802 "subtype": "Discovery" 00:12:41.802 }, 00:12:41.802 { 00:12:41.802 "allow_any_host": true, 00:12:41.802 "hosts": [], 00:12:41.802 "listen_addresses": [ 00:12:41.802 { 00:12:41.802 "adrfam": "IPv4", 00:12:41.802 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:41.802 "transport": "VFIOUSER", 00:12:41.802 "trsvcid": "0", 00:12:41.802 "trtype": "VFIOUSER" 00:12:41.802 } 00:12:41.802 ], 00:12:41.802 "max_cntlid": 65519, 00:12:41.802 "max_namespaces": 32, 00:12:41.802 "min_cntlid": 1, 00:12:41.802 "model_number": "SPDK bdev Controller", 00:12:41.802 "namespaces": [ 00:12:41.802 { 00:12:41.802 "bdev_name": "Malloc1", 00:12:41.802 "name": "Malloc1", 00:12:41.802 "nguid": "A112291EFD2B49CC99F9437755D235CC", 00:12:41.802 "nsid": 1, 00:12:41.802 "uuid": "a112291e-fd2b-49cc-99f9-437755d235cc" 00:12:41.802 } 00:12:41.802 ], 00:12:41.802 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:41.802 "serial_number": "SPDK1", 00:12:41.802 "subtype": "NVMe" 00:12:41.802 }, 00:12:41.802 { 00:12:41.802 "allow_any_host": true, 00:12:41.802 "hosts": [], 00:12:41.802 "listen_addresses": [ 00:12:41.802 { 00:12:41.802 "adrfam": "IPv4", 00:12:41.802 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:41.802 "transport": "VFIOUSER", 00:12:41.802 "trsvcid": "0", 00:12:41.802 "trtype": "VFIOUSER" 00:12:41.802 } 00:12:41.802 ], 00:12:41.802 "max_cntlid": 65519, 00:12:41.802 "max_namespaces": 32, 00:12:41.802 "min_cntlid": 1, 00:12:41.802 "model_number": "SPDK bdev Controller", 00:12:41.802 "namespaces": [ 00:12:41.802 { 00:12:41.802 "bdev_name": "Malloc2", 00:12:41.802 "name": "Malloc2", 00:12:41.802 "nguid": "E3BB852B5332463390C5BA95B0C3208B", 00:12:41.802 "nsid": 1, 00:12:41.802 "uuid": "e3bb852b-5332-4633-90c5-ba95b0c3208b" 00:12:41.802 } 00:12:41.802 ], 00:12:41.802 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:41.802 "serial_number": "SPDK2", 00:12:41.802 "subtype": "NVMe" 00:12:41.802 } 00:12:41.802 ] 00:12:41.802 18:07:39 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:41.802 18:07:39 -- target/nvmf_vfio_user.sh@34 -- # aerpid=69609 00:12:41.802 18:07:39 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:41.802 18:07:39 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:41.802 18:07:39 -- common/autotest_common.sh@1244 -- # local i=0 00:12:41.802 18:07:39 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:41.802 18:07:39 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:12:41.802 18:07:39 -- common/autotest_common.sh@1247 -- # i=1 00:12:41.802 18:07:39 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:12:41.802 18:07:39 -- common/autotest_common.sh@1245 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:41.802 18:07:39 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:12:41.802 18:07:39 -- common/autotest_common.sh@1247 -- # i=2 00:12:41.802 18:07:39 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:12:42.060 18:07:39 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:42.060 18:07:39 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:12:42.060 18:07:39 -- common/autotest_common.sh@1247 -- # i=3 00:12:42.060 18:07:39 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:12:42.061 18:07:39 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:42.061 18:07:39 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:42.061 18:07:39 -- common/autotest_common.sh@1255 -- # return 0 00:12:42.061 18:07:39 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:42.061 18:07:39 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:42.318 Malloc3 00:12:42.318 18:07:40 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:42.884 18:07:40 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:42.884 Asynchronous Event Request test 00:12:42.884 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:42.884 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:42.884 Registering asynchronous event callbacks... 00:12:42.884 Starting namespace attribute notice tests for all controllers... 00:12:42.884 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:42.884 aer_cb - Changed Namespace 00:12:42.884 Cleaning up... 
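For reference, the namespace hot-add exercised by this AER test is driven entirely over JSON-RPC; a minimal sketch of the sequence, using the same rpc.py helper, bdev size, and subsystem names shown above, would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems

The nvmf_get_subsystems output that follows reflects the result: cnode1 now lists Malloc3 as nsid 2 alongside the original Malloc1, while cnode2 is unchanged.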
00:12:42.884 [ 00:12:42.884 { 00:12:42.884 "allow_any_host": true, 00:12:42.884 "hosts": [], 00:12:42.884 "listen_addresses": [], 00:12:42.884 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:42.884 "subtype": "Discovery" 00:12:42.884 }, 00:12:42.884 { 00:12:42.884 "allow_any_host": true, 00:12:42.884 "hosts": [], 00:12:42.884 "listen_addresses": [ 00:12:42.884 { 00:12:42.884 "adrfam": "IPv4", 00:12:42.884 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:42.884 "transport": "VFIOUSER", 00:12:42.884 "trsvcid": "0", 00:12:42.884 "trtype": "VFIOUSER" 00:12:42.884 } 00:12:42.884 ], 00:12:42.884 "max_cntlid": 65519, 00:12:42.884 "max_namespaces": 32, 00:12:42.884 "min_cntlid": 1, 00:12:42.884 "model_number": "SPDK bdev Controller", 00:12:42.884 "namespaces": [ 00:12:42.884 { 00:12:42.884 "bdev_name": "Malloc1", 00:12:42.884 "name": "Malloc1", 00:12:42.884 "nguid": "A112291EFD2B49CC99F9437755D235CC", 00:12:42.884 "nsid": 1, 00:12:42.884 "uuid": "a112291e-fd2b-49cc-99f9-437755d235cc" 00:12:42.884 }, 00:12:42.884 { 00:12:42.884 "bdev_name": "Malloc3", 00:12:42.884 "name": "Malloc3", 00:12:42.884 "nguid": "D0721B99ABF442AE96963461690F8CE0", 00:12:42.884 "nsid": 2, 00:12:42.884 "uuid": "d0721b99-abf4-42ae-9696-3461690f8ce0" 00:12:42.884 } 00:12:42.884 ], 00:12:42.884 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:42.884 "serial_number": "SPDK1", 00:12:42.884 "subtype": "NVMe" 00:12:42.884 }, 00:12:42.884 { 00:12:42.884 "allow_any_host": true, 00:12:42.884 "hosts": [], 00:12:42.884 "listen_addresses": [ 00:12:42.884 { 00:12:42.884 "adrfam": "IPv4", 00:12:42.884 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:42.884 "transport": "VFIOUSER", 00:12:42.884 "trsvcid": "0", 00:12:42.884 "trtype": "VFIOUSER" 00:12:42.884 } 00:12:42.884 ], 00:12:42.884 "max_cntlid": 65519, 00:12:42.884 "max_namespaces": 32, 00:12:42.884 "min_cntlid": 1, 00:12:42.884 "model_number": "SPDK bdev Controller", 00:12:42.884 "namespaces": [ 00:12:42.884 { 00:12:42.884 "bdev_name": "Malloc2", 00:12:42.884 "name": "Malloc2", 00:12:42.884 "nguid": "E3BB852B5332463390C5BA95B0C3208B", 00:12:42.884 "nsid": 1, 00:12:42.884 "uuid": "e3bb852b-5332-4633-90c5-ba95b0c3208b" 00:12:42.884 } 00:12:42.884 ], 00:12:42.884 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:42.884 "serial_number": "SPDK2", 00:12:42.884 "subtype": "NVMe" 00:12:42.884 } 00:12:42.884 ] 00:12:42.884 18:07:40 -- target/nvmf_vfio_user.sh@44 -- # wait 69609 00:12:42.884 18:07:40 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:42.884 18:07:40 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:42.884 18:07:40 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:42.884 18:07:40 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:42.884 [2024-04-25 18:07:40.808899] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:42.884 [2024-04-25 18:07:40.808960] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69647 ] 00:12:43.143 [2024-04-25 18:07:40.948201] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:43.143 [2024-04-25 18:07:40.963671] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:43.143 [2024-04-25 18:07:40.963726] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa5f9c31000 00:12:43.143 [2024-04-25 18:07:40.964671] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.965682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.966682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.967690] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.968698] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.969708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.970707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.971720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.143 [2024-04-25 18:07:40.972716] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:43.143 [2024-04-25 18:07:40.972742] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa5f9c26000 00:12:43.143 [2024-04-25 18:07:40.973833] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:43.144 [2024-04-25 18:07:40.990756] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:43.144 [2024-04-25 18:07:40.990797] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:43.144 [2024-04-25 18:07:40.992906] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:43.144 [2024-04-25 18:07:40.992995] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:43.144 [2024-04-25 18:07:40.993092] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:43.144 [2024-04-25 
18:07:40.993119] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:43.144 [2024-04-25 18:07:40.993127] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:43.144 [2024-04-25 18:07:40.993883] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:43.144 [2024-04-25 18:07:40.993914] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:43.144 [2024-04-25 18:07:40.993926] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:43.144 [2024-04-25 18:07:40.994880] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:43.144 [2024-04-25 18:07:40.994909] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:43.144 [2024-04-25 18:07:40.994922] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:43.144 [2024-04-25 18:07:40.995889] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:43.144 [2024-04-25 18:07:40.995926] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:43.144 [2024-04-25 18:07:40.996889] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:43.144 [2024-04-25 18:07:40.996912] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:43.144 [2024-04-25 18:07:40.996919] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:43.144 [2024-04-25 18:07:40.996928] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:43.144 [2024-04-25 18:07:40.997044] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:43.144 [2024-04-25 18:07:40.997050] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:43.144 [2024-04-25 18:07:40.997057] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:43.144 [2024-04-25 18:07:40.997901] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:43.144 [2024-04-25 18:07:40.998907] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:43.144 [2024-04-25 18:07:40.999915] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:12:43.144 [2024-04-25 18:07:41.000961] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:43.144 [2024-04-25 18:07:41.001933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:43.144 [2024-04-25 18:07:41.001969] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:43.144 [2024-04-25 18:07:41.001976] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.001996] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:43.144 [2024-04-25 18:07:41.002016] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.002035] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.144 [2024-04-25 18:07:41.002041] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.144 [2024-04-25 18:07:41.002056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.144 [2024-04-25 18:07:41.008347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:43.144 [2024-04-25 18:07:41.008384] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:43.144 [2024-04-25 18:07:41.008397] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:43.144 [2024-04-25 18:07:41.008402] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:43.144 [2024-04-25 18:07:41.008407] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:43.144 [2024-04-25 18:07:41.008412] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:43.144 [2024-04-25 18:07:41.008417] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:43.144 [2024-04-25 18:07:41.008423] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.008435] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.008448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:43.144 [2024-04-25 18:07:41.016284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:43.144 [2024-04-25 18:07:41.016310] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.144 [2024-04-25 18:07:41.016320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.144 [2024-04-25 18:07:41.016328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.144 [2024-04-25 18:07:41.016336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.144 [2024-04-25 18:07:41.016342] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.016356] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.016367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:43.144 [2024-04-25 18:07:41.024320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:43.144 [2024-04-25 18:07:41.024345] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:43.144 [2024-04-25 18:07:41.024361] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.024371] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.024383] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.024396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:43.144 [2024-04-25 18:07:41.031344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:43.144 [2024-04-25 18:07:41.031425] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.031438] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.031449] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:43.144 [2024-04-25 18:07:41.031455] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:43.144 [2024-04-25 18:07:41.031463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:43.144 [2024-04-25 18:07:41.039333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:43.144 [2024-04-25 
18:07:41.039383] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:43.144 [2024-04-25 18:07:41.039400] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.039410] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.039419] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.144 [2024-04-25 18:07:41.039424] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.144 [2024-04-25 18:07:41.039431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.144 [2024-04-25 18:07:41.047319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:43.144 [2024-04-25 18:07:41.047370] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.047382] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:43.144 [2024-04-25 18:07:41.047392] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.144 [2024-04-25 18:07:41.047397] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.144 [2024-04-25 18:07:41.047404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.144 [2024-04-25 18:07:41.055334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:43.144 [2024-04-25 18:07:41.055356] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:43.145 [2024-04-25 18:07:41.055366] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:43.145 [2024-04-25 18:07:41.055383] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:43.145 [2024-04-25 18:07:41.055390] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:43.145 [2024-04-25 18:07:41.055395] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:43.145 [2024-04-25 18:07:41.055401] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:43.145 [2024-04-25 18:07:41.055405] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:43.145 [2024-04-25 18:07:41.055411] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:43.145 [2024-04-25 18:07:41.055436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:43.145 [2024-04-25 18:07:41.063315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:43.145 [2024-04-25 18:07:41.063368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:43.145 [2024-04-25 18:07:41.071350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:43.145 [2024-04-25 18:07:41.071388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:43.403 [2024-04-25 18:07:41.079319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:43.403 [2024-04-25 18:07:41.079344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:43.403 [2024-04-25 18:07:41.087283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:43.403 [2024-04-25 18:07:41.087323] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:43.403 [2024-04-25 18:07:41.087329] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:43.403 [2024-04-25 18:07:41.087333] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:43.403 [2024-04-25 18:07:41.087337] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:43.403 [2024-04-25 18:07:41.087344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:43.403 [2024-04-25 18:07:41.087352] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:43.403 [2024-04-25 18:07:41.087356] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:43.403 [2024-04-25 18:07:41.087362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:43.403 [2024-04-25 18:07:41.087370] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:43.403 [2024-04-25 18:07:41.087374] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.403 [2024-04-25 18:07:41.087380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.403 [2024-04-25 18:07:41.087389] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:43.403 [2024-04-25 18:07:41.087393] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:43.403 [2024-04-25 18:07:41.087399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:43.403 ===================================================== 00:12:43.403 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:43.403 ===================================================== 00:12:43.403 Controller Capabilities/Features 00:12:43.403 ================================ 00:12:43.403 Vendor ID: 4e58 00:12:43.403 Subsystem Vendor ID: 4e58 00:12:43.403 Serial Number: SPDK2 00:12:43.403 Model Number: SPDK bdev Controller 00:12:43.403 Firmware Version: 24.01.1 00:12:43.403 Recommended Arb Burst: 6 00:12:43.403 IEEE OUI Identifier: 8d 6b 50 00:12:43.403 Multi-path I/O 00:12:43.403 May have multiple subsystem ports: Yes 00:12:43.403 May have multiple controllers: Yes 00:12:43.403 Associated with SR-IOV VF: No 00:12:43.403 Max Data Transfer Size: 131072 00:12:43.403 Max Number of Namespaces: 32 00:12:43.403 Max Number of I/O Queues: 127 00:12:43.403 NVMe Specification Version (VS): 1.3 00:12:43.403 NVMe Specification Version (Identify): 1.3 00:12:43.403 Maximum Queue Entries: 256 00:12:43.403 Contiguous Queues Required: Yes 00:12:43.403 Arbitration Mechanisms Supported 00:12:43.403 Weighted Round Robin: Not Supported 00:12:43.403 Vendor Specific: Not Supported 00:12:43.403 Reset Timeout: 15000 ms 00:12:43.403 Doorbell Stride: 4 bytes 00:12:43.403 NVM Subsystem Reset: Not Supported 00:12:43.403 Command Sets Supported 00:12:43.403 NVM Command Set: Supported 00:12:43.403 Boot Partition: Not Supported 00:12:43.403 Memory Page Size Minimum: 4096 bytes 00:12:43.403 Memory Page Size Maximum: 4096 bytes 00:12:43.403 Persistent Memory Region: Not Supported 00:12:43.403 Optional Asynchronous Events Supported 00:12:43.403 Namespace Attribute Notices: Supported 00:12:43.403 Firmware Activation Notices: Not Supported 00:12:43.403 ANA Change Notices: Not Supported 00:12:43.403 PLE Aggregate Log Change Notices: Not Supported 00:12:43.403 LBA Status Info Alert Notices: Not Supported 00:12:43.404 EGE Aggregate Log Change Notices: Not Supported 00:12:43.404 Normal NVM Subsystem Shutdown event: Not Supported 00:12:43.404 Zone Descriptor Change Notices: Not Supported 00:12:43.404 Discovery Log Change Notices: Not Supported 00:12:43.404 Controller Attributes 00:12:43.404 128-bit Host Identifier: Supported 00:12:43.404 Non-Operational Permissive Mode: Not Supported 00:12:43.404 NVM Sets: Not Supported 00:12:43.404 Read Recovery Levels: Not Supported 00:12:43.404 Endurance Groups: Not Supported 00:12:43.404 Predictable Latency Mode: Not Supported 00:12:43.404 Traffic Based Keep ALive: Not Supported 00:12:43.404 Namespace Granularity: Not Supported 00:12:43.404 SQ Associations: Not Supported 00:12:43.404 UUID List: Not Supported 00:12:43.404 Multi-Domain Subsystem: Not Supported 00:12:43.404 Fixed Capacity Management: Not Supported 00:12:43.404 Variable Capacity Management: Not Supported 00:12:43.404 Delete Endurance Group: Not Supported 00:12:43.404 Delete NVM Set: Not Supported 00:12:43.404 Extended LBA Formats Supported: Not Supported 00:12:43.404 Flexible Data Placement Supported: Not Supported 00:12:43.404 00:12:43.404 Controller Memory Buffer Support 00:12:43.404 ================================ 00:12:43.404 Supported: No 00:12:43.404 00:12:43.404 Persistent Memory Region Support 00:12:43.404 ================================ 00:12:43.404 Supported: No 00:12:43.404 00:12:43.404 Admin Command Set Attributes 00:12:43.404 ============================ 00:12:43.404 Security 
Send/Receive: Not Supported 00:12:43.404 Format NVM: Not Supported 00:12:43.404 Firmware Activate/Download: Not Supported 00:12:43.404 Namespace Management: Not Supported 00:12:43.404 Device Self-Test: Not Supported 00:12:43.404 Directives: Not Supported 00:12:43.404 NVMe-MI: Not Supported 00:12:43.404 Virtualization Management: Not Supported 00:12:43.404 Doorbell Buffer Config: Not Supported 00:12:43.404 Get LBA Status Capability: Not Supported 00:12:43.404 Command & Feature Lockdown Capability: Not Supported 00:12:43.404 Abort Command Limit: 4 00:12:43.404 Async Event Request Limit: 4 00:12:43.404 Number of Firmware Slots: N/A 00:12:43.404 Firmware Slot 1 Read-Only: N/A 00:12:43.404 Firmware Activation Wit[2024-04-25 18:07:41.093369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.093403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.093416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.093424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:43.404 hout Reset: N/A 00:12:43.404 Multiple Update Detection Support: N/A 00:12:43.404 Firmware Update Granularity: No Information Provided 00:12:43.404 Per-Namespace SMART Log: No 00:12:43.404 Asymmetric Namespace Access Log Page: Not Supported 00:12:43.404 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:43.404 Command Effects Log Page: Supported 00:12:43.404 Get Log Page Extended Data: Supported 00:12:43.404 Telemetry Log Pages: Not Supported 00:12:43.404 Persistent Event Log Pages: Not Supported 00:12:43.404 Supported Log Pages Log Page: May Support 00:12:43.404 Commands Supported & Effects Log Page: Not Supported 00:12:43.404 Feature Identifiers & Effects Log Page:May Support 00:12:43.404 NVMe-MI Commands & Effects Log Page: May Support 00:12:43.404 Data Area 4 for Telemetry Log: Not Supported 00:12:43.404 Error Log Page Entries Supported: 128 00:12:43.404 Keep Alive: Supported 00:12:43.404 Keep Alive Granularity: 10000 ms 00:12:43.404 00:12:43.404 NVM Command Set Attributes 00:12:43.404 ========================== 00:12:43.404 Submission Queue Entry Size 00:12:43.404 Max: 64 00:12:43.404 Min: 64 00:12:43.404 Completion Queue Entry Size 00:12:43.404 Max: 16 00:12:43.404 Min: 16 00:12:43.404 Number of Namespaces: 32 00:12:43.404 Compare Command: Supported 00:12:43.404 Write Uncorrectable Command: Not Supported 00:12:43.404 Dataset Management Command: Supported 00:12:43.404 Write Zeroes Command: Supported 00:12:43.404 Set Features Save Field: Not Supported 00:12:43.404 Reservations: Not Supported 00:12:43.404 Timestamp: Not Supported 00:12:43.404 Copy: Supported 00:12:43.404 Volatile Write Cache: Present 00:12:43.404 Atomic Write Unit (Normal): 1 00:12:43.404 Atomic Write Unit (PFail): 1 00:12:43.404 Atomic Compare & Write Unit: 1 00:12:43.404 Fused Compare & Write: Supported 00:12:43.404 Scatter-Gather List 00:12:43.404 SGL Command Set: Supported (Dword aligned) 00:12:43.404 SGL Keyed: Not Supported 00:12:43.404 SGL Bit Bucket Descriptor: Not Supported 00:12:43.404 SGL Metadata Pointer: Not Supported 00:12:43.404 Oversized SGL: Not Supported 00:12:43.404 SGL Metadata Address: Not Supported 00:12:43.404 SGL Offset: Not Supported 00:12:43.404 Transport SGL Data Block: 
Not Supported 00:12:43.404 Replay Protected Memory Block: Not Supported 00:12:43.404 00:12:43.404 Firmware Slot Information 00:12:43.404 ========================= 00:12:43.404 Active slot: 1 00:12:43.404 Slot 1 Firmware Revision: 24.01.1 00:12:43.404 00:12:43.404 00:12:43.404 Commands Supported and Effects 00:12:43.404 ============================== 00:12:43.404 Admin Commands 00:12:43.404 -------------- 00:12:43.404 Get Log Page (02h): Supported 00:12:43.404 Identify (06h): Supported 00:12:43.404 Abort (08h): Supported 00:12:43.404 Set Features (09h): Supported 00:12:43.404 Get Features (0Ah): Supported 00:12:43.404 Asynchronous Event Request (0Ch): Supported 00:12:43.404 Keep Alive (18h): Supported 00:12:43.404 I/O Commands 00:12:43.404 ------------ 00:12:43.404 Flush (00h): Supported LBA-Change 00:12:43.404 Write (01h): Supported LBA-Change 00:12:43.404 Read (02h): Supported 00:12:43.404 Compare (05h): Supported 00:12:43.404 Write Zeroes (08h): Supported LBA-Change 00:12:43.404 Dataset Management (09h): Supported LBA-Change 00:12:43.404 Copy (19h): Supported LBA-Change 00:12:43.404 Unknown (79h): Supported LBA-Change 00:12:43.404 Unknown (7Ah): Supported 00:12:43.404 00:12:43.404 Error Log 00:12:43.404 ========= 00:12:43.404 00:12:43.404 Arbitration 00:12:43.404 =========== 00:12:43.404 Arbitration Burst: 1 00:12:43.404 00:12:43.404 Power Management 00:12:43.404 ================ 00:12:43.404 Number of Power States: 1 00:12:43.404 Current Power State: Power State #0 00:12:43.404 Power State #0: 00:12:43.404 Max Power: 0.00 W 00:12:43.404 Non-Operational State: Operational 00:12:43.404 Entry Latency: Not Reported 00:12:43.404 Exit Latency: Not Reported 00:12:43.404 Relative Read Throughput: 0 00:12:43.404 Relative Read Latency: 0 00:12:43.404 Relative Write Throughput: 0 00:12:43.404 Relative Write Latency: 0 00:12:43.404 Idle Power: Not Reported 00:12:43.404 Active Power: Not Reported 00:12:43.404 Non-Operational Permissive Mode: Not Supported 00:12:43.404 00:12:43.404 Health Information 00:12:43.404 ================== 00:12:43.404 Critical Warnings: 00:12:43.404 Available Spare Space: OK 00:12:43.404 Temperature: OK 00:12:43.404 Device Reliability: OK 00:12:43.404 Read Only: No 00:12:43.404 Volatile Memory Backup: OK 00:12:43.404 Current Temperature: 0 Kelvin[2024-04-25 18:07:41.093557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:43.404 [2024-04-25 18:07:41.101288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.101362] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:43.404 [2024-04-25 18:07:41.101377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.101397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.101403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.101410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.404 [2024-04-25 18:07:41.101498] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:43.404 [2024-04-25 18:07:41.101516] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:43.404 [2024-04-25 18:07:41.102544] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:43.404 [2024-04-25 18:07:41.102564] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:43.404 [2024-04-25 18:07:41.103487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:43.404 [2024-04-25 18:07:41.103524] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:43.405 [2024-04-25 18:07:41.103774] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:43.405 [2024-04-25 18:07:41.106316] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:43.405 (-273 Celsius) 00:12:43.405 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:43.405 Available Spare: 0% 00:12:43.405 Available Spare Threshold: 0% 00:12:43.405 Life Percentage Used: 0% 00:12:43.405 Data Units Read: 0 00:12:43.405 Data Units Written: 0 00:12:43.405 Host Read Commands: 0 00:12:43.405 Host Write Commands: 0 00:12:43.405 Controller Busy Time: 0 minutes 00:12:43.405 Power Cycles: 0 00:12:43.405 Power On Hours: 0 hours 00:12:43.405 Unsafe Shutdowns: 0 00:12:43.405 Unrecoverable Media Errors: 0 00:12:43.405 Lifetime Error Log Entries: 0 00:12:43.405 Warning Temperature Time: 0 minutes 00:12:43.405 Critical Temperature Time: 0 minutes 00:12:43.405 00:12:43.405 Number of Queues 00:12:43.405 ================ 00:12:43.405 Number of I/O Submission Queues: 127 00:12:43.405 Number of I/O Completion Queues: 127 00:12:43.405 00:12:43.405 Active Namespaces 00:12:43.405 ================= 00:12:43.405 Namespace ID:1 00:12:43.405 Error Recovery Timeout: Unlimited 00:12:43.405 Command Set Identifier: NVM (00h) 00:12:43.405 Deallocate: Supported 00:12:43.405 Deallocated/Unwritten Error: Not Supported 00:12:43.405 Deallocated Read Value: Unknown 00:12:43.405 Deallocate in Write Zeroes: Not Supported 00:12:43.405 Deallocated Guard Field: 0xFFFF 00:12:43.405 Flush: Supported 00:12:43.405 Reservation: Supported 00:12:43.405 Namespace Sharing Capabilities: Multiple Controllers 00:12:43.405 Size (in LBAs): 131072 (0GiB) 00:12:43.405 Capacity (in LBAs): 131072 (0GiB) 00:12:43.405 Utilization (in LBAs): 131072 (0GiB) 00:12:43.405 NGUID: E3BB852B5332463390C5BA95B0C3208B 00:12:43.405 UUID: e3bb852b-5332-4633-90c5-ba95b0c3208b 00:12:43.405 Thin Provisioning: Not Supported 00:12:43.405 Per-NS Atomic Units: Yes 00:12:43.405 Atomic Boundary Size (Normal): 0 00:12:43.405 Atomic Boundary Size (PFail): 0 00:12:43.405 Atomic Boundary Offset: 0 00:12:43.405 Maximum Single Source Range Length: 65535 00:12:43.405 Maximum Copy Length: 65535 00:12:43.405 Maximum Source Range Count: 1 00:12:43.405 NGUID/EUI64 Never Reused: No 00:12:43.405 Namespace Write Protected: No 00:12:43.405 Number of LBA Formats: 1 00:12:43.405 Current LBA Format: LBA Format #00 00:12:43.405 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:43.405 00:12:43.405 
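Every client-side example in the remainder of this run addresses the target through the same VFIOUSER transport ID string. As a minimal sketch of that pattern (binary path and arguments copied from the spdk_nvme_perf invocation recorded below; the TRID shell variable is introduced here purely for illustration):

# TRID is an illustrative variable, not part of the test script itself.
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2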
18:07:41 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:48.669 Initializing NVMe Controllers 00:12:48.669 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:48.669 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:48.669 Initialization complete. Launching workers. 00:12:48.669 ======================================================== 00:12:48.669 Latency(us) 00:12:48.669 Device Information : IOPS MiB/s Average min max 00:12:48.669 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36961.80 144.38 3462.49 1066.65 10590.43 00:12:48.669 ======================================================== 00:12:48.669 Total : 36961.80 144.38 3462.49 1066.65 10590.43 00:12:48.669 00:12:48.669 18:07:46 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:55.228 Initializing NVMe Controllers 00:12:55.228 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:55.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:55.228 Initialization complete. Launching workers. 00:12:55.228 ======================================================== 00:12:55.228 Latency(us) 00:12:55.228 Device Information : IOPS MiB/s Average min max 00:12:55.228 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32854.18 128.34 3895.50 1107.44 10708.56 00:12:55.228 ======================================================== 00:12:55.228 Total : 32854.18 128.34 3895.50 1107.44 10708.56 00:12:55.228 00:12:55.228 18:07:51 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:59.412 Initializing NVMe Controllers 00:12:59.412 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.412 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.412 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:59.412 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:59.412 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:59.412 Initialization complete. Launching workers. 
00:12:59.412 Starting thread on core 2 00:12:59.412 Starting thread on core 3 00:12:59.412 Starting thread on core 1 00:12:59.412 18:07:57 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:03.601 Initializing NVMe Controllers 00:13:03.601 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.601 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.601 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:03.601 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:03.601 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:03.601 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:03.601 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:03.601 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:03.601 Initialization complete. Launching workers. 00:13:03.601 Starting thread on core 1 with urgent priority queue 00:13:03.601 Starting thread on core 2 with urgent priority queue 00:13:03.601 Starting thread on core 3 with urgent priority queue 00:13:03.601 Starting thread on core 0 with urgent priority queue 00:13:03.601 SPDK bdev Controller (SPDK2 ) core 0: 5488.67 IO/s 18.22 secs/100000 ios 00:13:03.601 SPDK bdev Controller (SPDK2 ) core 1: 4447.33 IO/s 22.49 secs/100000 ios 00:13:03.601 SPDK bdev Controller (SPDK2 ) core 2: 4619.00 IO/s 21.65 secs/100000 ios 00:13:03.601 SPDK bdev Controller (SPDK2 ) core 3: 4448.33 IO/s 22.48 secs/100000 ios 00:13:03.601 ======================================================== 00:13:03.601 00:13:03.601 18:08:00 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:03.601 Initializing NVMe Controllers 00:13:03.601 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.601 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.601 Namespace ID: 1 size: 0GB 00:13:03.601 Initialization complete. 00:13:03.601 INFO: using host memory buffer for IO 00:13:03.601 Hello world! 00:13:03.601 18:08:01 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:04.538 Initializing NVMe Controllers 00:13:04.538 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.538 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.538 Initialization complete. Launching workers. 
00:13:04.538 submit (in ns) avg, min, max = 8594.7, 3377.3, 5037629.1 00:13:04.538 complete (in ns) avg, min, max = 22411.0, 1848.2, 6073705.5 00:13:04.538 00:13:04.538 Submit histogram 00:13:04.538 ================ 00:13:04.538 Range in us Cumulative Count 00:13:04.538 3.375 - 3.389: 0.0069% ( 1) 00:13:04.538 3.389 - 3.404: 0.0345% ( 4) 00:13:04.538 3.404 - 3.418: 0.0552% ( 3) 00:13:04.538 3.418 - 3.433: 0.0897% ( 5) 00:13:04.538 3.433 - 3.447: 0.1311% ( 6) 00:13:04.538 3.447 - 3.462: 0.3243% ( 28) 00:13:04.538 3.462 - 3.476: 1.1455% ( 119) 00:13:04.539 3.476 - 3.491: 2.8914% ( 253) 00:13:04.539 3.491 - 3.505: 5.8036% ( 422) 00:13:04.539 3.505 - 3.520: 9.4541% ( 529) 00:13:04.539 3.520 - 3.535: 13.1806% ( 540) 00:13:04.539 3.535 - 3.549: 17.4867% ( 624) 00:13:04.539 3.549 - 3.564: 21.5444% ( 588) 00:13:04.539 3.564 - 3.578: 24.6360% ( 448) 00:13:04.539 3.578 - 3.593: 28.7213% ( 592) 00:13:04.539 3.593 - 3.607: 33.5657% ( 702) 00:13:04.539 3.607 - 3.622: 38.0581% ( 651) 00:13:04.539 3.622 - 3.636: 41.9571% ( 565) 00:13:04.539 3.636 - 3.651: 45.8146% ( 559) 00:13:04.539 3.651 - 3.665: 49.3755% ( 516) 00:13:04.539 3.665 - 3.680: 52.5499% ( 460) 00:13:04.539 3.680 - 3.695: 55.9382% ( 491) 00:13:04.539 3.695 - 3.709: 58.7882% ( 413) 00:13:04.539 3.709 - 3.724: 61.2863% ( 362) 00:13:04.539 3.724 - 3.753: 66.7863% ( 797) 00:13:04.539 3.753 - 3.782: 71.2166% ( 642) 00:13:04.539 3.782 - 3.811: 74.4048% ( 462) 00:13:04.539 3.811 - 3.840: 77.4274% ( 438) 00:13:04.539 3.840 - 3.869: 79.9324% ( 363) 00:13:04.539 3.869 - 3.898: 82.3063% ( 344) 00:13:04.539 3.898 - 3.927: 84.6663% ( 342) 00:13:04.539 3.927 - 3.956: 86.6124% ( 282) 00:13:04.539 3.956 - 3.985: 88.3238% ( 248) 00:13:04.539 3.985 - 4.015: 89.9593% ( 237) 00:13:04.539 4.015 - 4.044: 91.2773% ( 191) 00:13:04.539 4.044 - 4.073: 92.2159% ( 136) 00:13:04.539 4.073 - 4.102: 93.0923% ( 127) 00:13:04.539 4.102 - 4.131: 93.9342% ( 122) 00:13:04.539 4.131 - 4.160: 94.6242% ( 100) 00:13:04.539 4.160 - 4.189: 95.1004% ( 69) 00:13:04.539 4.189 - 4.218: 95.7008% ( 87) 00:13:04.539 4.218 - 4.247: 96.1838% ( 70) 00:13:04.539 4.247 - 4.276: 96.4668% ( 41) 00:13:04.539 4.276 - 4.305: 96.8394% ( 54) 00:13:04.539 4.305 - 4.335: 96.9981% ( 23) 00:13:04.539 4.335 - 4.364: 97.1569% ( 23) 00:13:04.539 4.364 - 4.393: 97.3018% ( 21) 00:13:04.539 4.393 - 4.422: 97.4812% ( 26) 00:13:04.539 4.422 - 4.451: 97.6054% ( 18) 00:13:04.539 4.451 - 4.480: 97.6951% ( 13) 00:13:04.539 4.480 - 4.509: 97.7710% ( 11) 00:13:04.539 4.509 - 4.538: 97.8676% ( 14) 00:13:04.539 4.538 - 4.567: 97.9367% ( 10) 00:13:04.539 4.567 - 4.596: 97.9643% ( 4) 00:13:04.539 4.596 - 4.625: 98.0264% ( 9) 00:13:04.539 4.625 - 4.655: 98.0609% ( 5) 00:13:04.539 4.655 - 4.684: 98.1299% ( 10) 00:13:04.539 4.684 - 4.713: 98.1782% ( 7) 00:13:04.539 4.713 - 4.742: 98.2058% ( 4) 00:13:04.539 4.742 - 4.771: 98.2265% ( 3) 00:13:04.539 4.771 - 4.800: 98.2541% ( 4) 00:13:04.539 4.800 - 4.829: 98.2679% ( 2) 00:13:04.539 4.858 - 4.887: 98.2817% ( 2) 00:13:04.539 4.887 - 4.916: 98.2955% ( 2) 00:13:04.539 4.916 - 4.945: 98.3093% ( 2) 00:13:04.539 4.975 - 5.004: 98.3162% ( 1) 00:13:04.539 5.004 - 5.033: 98.3300% ( 2) 00:13:04.539 5.120 - 5.149: 98.3369% ( 1) 00:13:04.539 5.149 - 5.178: 98.3507% ( 2) 00:13:04.539 6.225 - 6.255: 98.3576% ( 1) 00:13:04.539 7.564 - 7.622: 98.3645% ( 1) 00:13:04.539 7.680 - 7.738: 98.3714% ( 1) 00:13:04.539 7.738 - 7.796: 98.3783% ( 1) 00:13:04.539 7.796 - 7.855: 98.3852% ( 1) 00:13:04.539 7.855 - 7.913: 98.3921% ( 1) 00:13:04.539 7.971 - 8.029: 98.3990% ( 1) 00:13:04.539 8.029 - 8.087: 
98.4128% ( 2) 00:13:04.539 8.087 - 8.145: 98.4197% ( 1) 00:13:04.539 8.320 - 8.378: 98.4266% ( 1) 00:13:04.539 8.378 - 8.436: 98.4335% ( 1) 00:13:04.539 8.436 - 8.495: 98.4404% ( 1) 00:13:04.539 8.495 - 8.553: 98.4473% ( 1) 00:13:04.539 8.669 - 8.727: 98.4611% ( 2) 00:13:04.539 8.727 - 8.785: 98.4680% ( 1) 00:13:04.539 8.785 - 8.844: 98.4749% ( 1) 00:13:04.539 8.902 - 8.960: 98.4818% ( 1) 00:13:04.539 8.960 - 9.018: 98.4887% ( 1) 00:13:04.539 9.018 - 9.076: 98.4956% ( 1) 00:13:04.539 9.076 - 9.135: 98.5025% ( 1) 00:13:04.539 9.135 - 9.193: 98.5163% ( 2) 00:13:04.539 9.251 - 9.309: 98.5301% ( 2) 00:13:04.539 9.309 - 9.367: 98.5370% ( 1) 00:13:04.539 9.367 - 9.425: 98.5439% ( 1) 00:13:04.539 9.425 - 9.484: 98.5508% ( 1) 00:13:04.539 9.600 - 9.658: 98.5577% ( 1) 00:13:04.539 9.658 - 9.716: 98.5784% ( 3) 00:13:04.539 9.775 - 9.833: 98.5853% ( 1) 00:13:04.539 9.833 - 9.891: 98.5922% ( 1) 00:13:04.539 9.891 - 9.949: 98.6060% ( 2) 00:13:04.539 10.007 - 10.065: 98.6267% ( 3) 00:13:04.539 10.065 - 10.124: 98.6336% ( 1) 00:13:04.539 10.124 - 10.182: 98.6405% ( 1) 00:13:04.539 10.182 - 10.240: 98.6474% ( 1) 00:13:04.539 10.298 - 10.356: 98.6543% ( 1) 00:13:04.539 10.473 - 10.531: 98.6612% ( 1) 00:13:04.539 10.647 - 10.705: 98.6681% ( 1) 00:13:04.539 10.996 - 11.055: 98.6819% ( 2) 00:13:04.539 11.811 - 11.869: 98.6888% ( 1) 00:13:04.539 11.869 - 11.927: 98.6957% ( 1) 00:13:04.539 13.091 - 13.149: 98.7164% ( 3) 00:13:04.539 13.149 - 13.207: 98.7302% ( 2) 00:13:04.539 13.324 - 13.382: 98.7371% ( 1) 00:13:04.539 13.382 - 13.440: 98.7440% ( 1) 00:13:04.539 13.498 - 13.556: 98.7509% ( 1) 00:13:04.539 13.731 - 13.789: 98.7578% ( 1) 00:13:04.539 13.789 - 13.847: 98.7648% ( 1) 00:13:04.539 13.847 - 13.905: 98.7717% ( 1) 00:13:04.539 13.964 - 14.022: 98.7786% ( 1) 00:13:04.539 14.022 - 14.080: 98.7993% ( 3) 00:13:04.539 14.080 - 14.138: 98.8062% ( 1) 00:13:04.539 14.138 - 14.196: 98.8200% ( 2) 00:13:04.539 14.196 - 14.255: 98.8269% ( 1) 00:13:04.539 14.255 - 14.313: 98.8407% ( 2) 00:13:04.539 14.313 - 14.371: 98.8614% ( 3) 00:13:04.539 14.371 - 14.429: 98.8752% ( 2) 00:13:04.539 14.429 - 14.487: 98.8959% ( 3) 00:13:04.539 14.487 - 14.545: 98.9097% ( 2) 00:13:04.539 14.604 - 14.662: 98.9166% ( 1) 00:13:04.539 14.662 - 14.720: 98.9304% ( 2) 00:13:04.539 14.720 - 14.778: 98.9511% ( 3) 00:13:04.539 14.778 - 14.836: 98.9649% ( 2) 00:13:04.539 14.836 - 14.895: 98.9856% ( 3) 00:13:04.539 14.895 - 15.011: 99.0201% ( 5) 00:13:04.539 15.011 - 15.127: 99.0615% ( 6) 00:13:04.539 15.127 - 15.244: 99.0960% ( 5) 00:13:04.539 15.244 - 15.360: 99.1098% ( 2) 00:13:04.539 15.360 - 15.476: 99.1581% ( 7) 00:13:04.539 15.476 - 15.593: 99.1857% ( 4) 00:13:04.539 15.593 - 15.709: 99.1995% ( 2) 00:13:04.539 15.709 - 15.825: 99.2064% ( 1) 00:13:04.539 15.825 - 15.942: 99.2133% ( 1) 00:13:04.539 15.942 - 16.058: 99.2409% ( 4) 00:13:04.539 16.058 - 16.175: 99.2616% ( 3) 00:13:04.539 16.175 - 16.291: 99.2823% ( 3) 00:13:04.539 16.291 - 16.407: 99.2961% ( 2) 00:13:04.539 16.407 - 16.524: 99.3030% ( 1) 00:13:04.539 16.524 - 16.640: 99.3306% ( 4) 00:13:04.539 16.640 - 16.756: 99.3513% ( 3) 00:13:04.539 16.756 - 16.873: 99.3582% ( 1) 00:13:04.539 16.873 - 16.989: 99.3651% ( 1) 00:13:04.539 17.105 - 17.222: 99.3789% ( 2) 00:13:04.539 17.338 - 17.455: 99.3858% ( 1) 00:13:04.539 17.571 - 17.687: 99.3927% ( 1) 00:13:04.539 17.804 - 17.920: 99.4065% ( 2) 00:13:04.539 17.920 - 18.036: 99.4479% ( 6) 00:13:04.539 18.036 - 18.153: 99.4893% ( 6) 00:13:04.539 18.153 - 18.269: 99.5031% ( 2) 00:13:04.539 18.385 - 18.502: 99.5169% ( 2) 00:13:04.539 18.502 
- 18.618: 99.5376% ( 3) 00:13:04.539 18.618 - 18.735: 99.5445% ( 1) 00:13:04.539 18.735 - 18.851: 99.5790% ( 5) 00:13:04.539 18.851 - 18.967: 99.6067% ( 4) 00:13:04.539 18.967 - 19.084: 99.6481% ( 6) 00:13:04.539 19.084 - 19.200: 99.6688% ( 3) 00:13:04.539 19.200 - 19.316: 99.7171% ( 7) 00:13:04.539 19.316 - 19.433: 99.7309% ( 2) 00:13:04.539 19.433 - 19.549: 99.7516% ( 3) 00:13:04.539 19.549 - 19.665: 99.7861% ( 5) 00:13:04.539 19.665 - 19.782: 99.7999% ( 2) 00:13:04.539 19.782 - 19.898: 99.8068% ( 1) 00:13:04.539 19.898 - 20.015: 99.8206% ( 2) 00:13:04.539 20.247 - 20.364: 99.8275% ( 1) 00:13:04.539 20.364 - 20.480: 99.8344% ( 1) 00:13:04.539 20.596 - 20.713: 99.8413% ( 1) 00:13:04.539 21.062 - 21.178: 99.8482% ( 1) 00:13:04.539 21.993 - 22.109: 99.8551% ( 1) 00:13:04.539 25.949 - 26.065: 99.8620% ( 1) 00:13:04.539 26.996 - 27.113: 99.8689% ( 1) 00:13:04.539 29.324 - 29.440: 99.8758% ( 1) 00:13:04.539 1020.276 - 1027.724: 99.8827% ( 1) 00:13:04.539 2025.658 - 2040.553: 99.8896% ( 1) 00:13:04.539 2978.909 - 2993.804: 99.9034% ( 2) 00:13:04.539 3038.487 - 3053.382: 99.9103% ( 1) 00:13:04.539 3961.949 - 3991.738: 99.9172% ( 1) 00:13:04.539 3991.738 - 4021.527: 99.9724% ( 8) 00:13:04.539 5004.567 - 5034.356: 99.9931% ( 3) 00:13:04.539 5034.356 - 5064.145: 100.0000% ( 1) 00:13:04.539 00:13:04.539 Complete histogram 00:13:04.539 ================== 00:13:04.539 Range in us Cumulative Count 00:13:04.539 1.847 - 1.855: 0.2139% ( 31) 00:13:04.539 1.855 - 1.862: 5.3965% ( 751) 00:13:04.539 1.862 - 1.876: 43.0060% ( 5450) 00:13:04.539 1.876 - 1.891: 60.7411% ( 2570) 00:13:04.539 1.891 - 1.905: 61.9488% ( 175) 00:13:04.539 1.905 - 1.920: 62.4664% ( 75) 00:13:04.539 1.920 - 1.935: 63.4394% ( 141) 00:13:04.540 1.935 - 1.949: 65.3026% ( 270) 00:13:04.540 1.949 - 1.964: 70.8302% ( 801) 00:13:04.540 1.964 - 1.978: 81.8991% ( 1604) 00:13:04.540 1.978 - 1.993: 84.9562% ( 443) 00:13:04.540 1.993 - 2.007: 85.4392% ( 70) 00:13:04.540 2.007 - 2.022: 87.4543% ( 292) 00:13:04.540 2.022 - 2.036: 89.8282% ( 344) 00:13:04.540 2.036 - 2.051: 91.0013% ( 170) 00:13:04.540 2.051 - 2.065: 91.4568% ( 66) 00:13:04.540 2.065 - 2.080: 92.1814% ( 105) 00:13:04.540 2.080 - 2.095: 92.8921% ( 103) 00:13:04.540 2.095 - 2.109: 93.0923% ( 29) 00:13:04.540 2.109 - 2.124: 93.2579% ( 24) 00:13:04.540 2.124 - 2.138: 93.7340% ( 69) 00:13:04.540 2.138 - 2.153: 94.3206% ( 85) 00:13:04.540 2.153 - 2.167: 94.6035% ( 41) 00:13:04.540 2.167 - 2.182: 94.6657% ( 9) 00:13:04.540 2.182 - 2.196: 94.7692% ( 15) 00:13:04.540 2.196 - 2.211: 95.2522% ( 70) 00:13:04.540 2.211 - 2.225: 95.5076% ( 37) 00:13:04.540 2.225 - 2.240: 95.5835% ( 11) 00:13:04.540 2.240 - 2.255: 95.6387% ( 8) 00:13:04.540 2.255 - 2.269: 95.8319% ( 28) 00:13:04.540 2.269 - 2.284: 96.1286% ( 43) 00:13:04.540 2.284 - 2.298: 96.2321% ( 15) 00:13:04.540 2.298 - 2.313: 96.2597% ( 4) 00:13:04.540 2.313 - 2.327: 96.2874% ( 4) 00:13:04.540 2.327 - 2.342: 96.4737% ( 27) 00:13:04.540 2.342 - 2.356: 96.8946% ( 61) 00:13:04.540 2.356 - 2.371: 96.9705% ( 11) 00:13:04.540 2.371 - 2.385: 97.0257% ( 8) 00:13:04.540 2.385 - 2.400: 97.0395% ( 2) 00:13:04.540 2.400 - 2.415: 97.0533% ( 2) 00:13:04.540 2.415 - 2.429: 97.1086% ( 8) 00:13:04.540 2.429 - 2.444: 97.1293% ( 3) 00:13:04.540 2.444 - 2.458: 97.1431% ( 2) 00:13:04.540 2.604 - 2.618: 97.1500% ( 1) 00:13:04.540 2.662 - 2.676: 97.1569% ( 1) 00:13:04.540 2.880 - 2.895: 97.1638% ( 1) 00:13:04.540 3.113 - 3.127: 97.1707% ( 1) 00:13:04.540 3.156 - 3.171: 97.1776% ( 1) 00:13:04.540 3.171 - 3.185: 97.1845% ( 1) 00:13:04.540 3.229 - 3.244: 97.1914% ( 
1) 00:13:04.540 3.244 - 3.258: 97.1983% ( 1) 00:13:04.540 3.258 - 3.273: 97.2052% ( 1) 00:13:04.540 3.287 - 3.302: 97.2121% ( 1) 00:13:04.540 3.302 - 3.316: 97.2328% ( 3) 00:13:04.540 3.316 - 3.331: 97.2397% ( 1) 00:13:04.540 3.331 - 3.345: 97.2466% ( 1) 00:13:04.540 3.345 - 3.360: 97.2535% ( 1) 00:13:04.540 3.360 - 3.375: 97.2673% ( 2) 00:13:04.540 3.375 - 3.389: 97.2742% ( 1) 00:13:04.540 3.389 - 3.404: 97.2811% ( 1) 00:13:04.540 3.404 - 3.418: 97.2880% ( 1) 00:13:04.540 3.491 - 3.505: 97.2949% ( 1) 00:13:04.540 3.520 - 3.535: 97.3156% ( 3) 00:13:04.540 3.564 - 3.578: 97.3225% ( 1) 00:13:04.540 3.578 - 3.593: 97.3432% ( 3) 00:13:04.540 3.622 - 3.636: 97.3501% ( 1) 00:13:04.540 3.636 - 3.651: 97.3639% ( 2) 00:13:04.540 3.665 - 3.680: 97.3708% ( 1) 00:13:04.540 3.840 - 3.869: 97.3915% ( 3) 00:13:04.540 3.927 - 3.956: 97.3984% ( 1) 00:13:04.540 3.985 - 4.015: 97.4053% ( 1) 00:13:04.540 4.015 - 4.044: 97.4191% ( 2) 00:13:04.540 4.044 - 4.073: 97.4260% ( 1) 00:13:04.540 4.073 - 4.102: 97.4398% ( 2) 00:13:04.540 4.102 - 4.131: 97.4467% ( 1) 00:13:04.540 4.131 - 4.160: 97.4536% ( 1) 00:13:04.540 4.218 - 4.247: 97.4605% ( 1) 00:13:04.540 4.800 - 4.829: 97.4674% ( 1) 00:13:04.540 5.673 - 5.702: 97.4743% ( 1) 00:13:04.540 6.051 - 6.080: 97.4812% ( 1) 00:13:04.540 6.138 - 6.167: 97.5019% ( 3) 00:13:04.540 6.225 - 6.255: 97.5088% ( 1) 00:13:04.540 6.284 - 6.313: 97.5157% ( 1) 00:13:04.540 6.342 - 6.371: 97.5295% ( 2) 00:13:04.540 6.371 - 6.400: 97.5433% ( 2) 00:13:04.540 6.400 - 6.429: 97.5502% ( 1) 00:13:04.540 6.458 - 6.487: 97.5571% ( 1) 00:13:04.540 6.487 - 6.516: 97.5640% ( 1) 00:13:04.540 6.575 - 6.604: 97.5778% ( 2) 00:13:04.540 6.633 - 6.662: 97.5847% ( 1) 00:13:04.540 6.662 - 6.691: 97.5916% ( 1) 00:13:04.540 6.778 - 6.807: 97.6054% ( 2) 00:13:04.540 6.953 - 6.982: 97.6123% ( 1) 00:13:04.540 7.011 - 7.040: 97.6192% ( 1) 00:13:04.540 7.040 - 7.069: 97.6261% ( 1) 00:13:04.540 7.069 - 7.098: 97.6399% ( 2) 00:13:04.540 7.185 - 7.215: 97.6468% ( 1) 00:13:04.540 7.273 - 7.302: 97.6537% ( 1) 00:13:04.540 7.360 - 7.389: 97.6606% ( 1) 00:13:04.540 7.447 - 7.505: 97.6813% ( 3) 00:13:04.540 7.505 - 7.564: 97.6882% ( 1) 00:13:04.540 7.564 - 7.622: 97.7020% ( 2) 00:13:04.540 7.622 - 7.680: 97.7158% ( 2) 00:13:04.540 7.680 - 7.738: 97.7227% ( 1) 00:13:04.540 7.796 - 7.855: 97.7296% ( 1) 00:13:04.540 7.855 - 7.913: 97.7365% ( 1) 00:13:04.540 8.204 - 8.262: 97.7434% ( 1) 00:13:04.540 8.262 - 8.320: 97.7503% ( 1) 00:13:04.540 8.320 - 8.378: 97.7572% ( 1) 00:13:04.540 8.611 - 8.669: 97.7641% ( 1) 00:13:04.540 8.785 - 8.844: 97.7779% ( 2) 00:13:04.540 8.844 - 8.902: 97.7848% ( 1) 00:13:04.540 8.960 - 9.018: 97.7917% ( 1) 00:13:04.540 9.018 - 9.076: 97.7986% ( 1) 00:13:04.540 9.600 - 9.658: 97.8055% ( 1) 00:13:04.540 10.356 - 10.415: 97.8124% ( 1) 00:13:04.540 10.647 - 10.705: 97.8193% ( 1) 00:13:04.540 10.938 - 10.996: 97.8262% ( 1) 00:13:04.540 11.287 - 11.345: 97.8331% ( 1) 00:13:04.540 11.404 - 11.462: 97.8400% ( 1) 00:13:04.540 11.520 - 11.578: 97.8469% ( 1) 00:13:04.540 12.858 - 12.916: 97.8538% ( 1) 00:13:04.540 12.975 - 13.033: 97.8607% ( 1) 00:13:04.540 13.033 - 13.091: 97.8745% ( 2) 00:13:04.540 13.091 - 13.149: 97.8814% ( 1) 00:13:04.540 13.149 - 13.207: 97.8952% ( 2) 00:13:04.540 13.207 - 13.265: 97.9090% ( 2) 00:13:04.540 13.265 - 13.324: 97.9159% ( 1) 00:13:04.540 13.324 - 13.382: 97.9228% ( 1) 00:13:04.540 13.382 - 13.440: 97.9297% ( 1) 00:13:04.540 13.615 - 13.673: 97.9367% ( 1) 00:13:04.540 13.673 - 13.731: 97.9574% ( 3) 00:13:04.540 13.847 - 13.905: 97.9643% ( 1) 00:13:04.540 13.905 - 13.964: 
97.9712% ( 1) 00:13:04.540 14.545 - 14.604: 97.9781% ( 1) 00:13:04.540 14.662 - 14.720: 97.9850% ( 1) 00:13:04.540 15.360 - 15.476: 97.9919% ( 1) 00:13:04.540 15.825 - 15.942: 97.9988% ( 1) 00:13:04.540 15.942 - 16.058: 98.0402% ( 6) 00:13:04.540 16.058 - 16.175: 98.1299% ( 13) 00:13:04.540 16.175 - 16.291: 98.2748% ( 21) 00:13:04.540 16.291 - 16.407: 98.3576% ( 12) 00:13:04.540 16.407 - 16.524: 98.4266% ( 10) 00:13:04.540 16.524 - 16.640: 98.4404% ( 2) 00:13:04.540 16.640 - 16.756: 98.4887% ( 7) 00:13:04.540 16.756 - 16.873: 98.5370% ( 7) 00:13:04.540 16.873 - 16.989: 98.5922% ( 8) 00:13:04.540 16.989 - 17.105: 98.6681% ( 11) 00:13:04.540 17.105 - 17.222: 98.7440% ( 11) 00:13:04.540 17.222 - 17.338: 98.8614% ( 17) 00:13:04.540 17.338 - 17.455: 98.9649% ( 15) 00:13:04.540 17.455 - 17.571: 99.0546% ( 13) 00:13:04.540 17.571 - 17.687: 99.1512% ( 14) 00:13:04.540 17.687 - 17.804: 99.2133% ( 9) 00:13:04.540 17.804 - 17.920: 99.2478% ( 5) 00:13:04.540 17.920 - 18.036: 99.2616% ( 2) 00:13:04.540 18.036 - 18.153: 99.2754% ( 2) 00:13:04.540 18.153 - 18.269: 99.2961% ( 3) 00:13:04.540 18.269 - 18.385: 99.3168% ( 3) 00:13:04.540 18.385 - 18.502: 99.3237% ( 1) 00:13:04.540 18.618 - 18.735: 99.3306% ( 1) 00:13:04.540 18.851 - 18.967: 99.3375% ( 1) 00:13:04.540 19.549 - 19.665: 99.3444% ( 1) 00:13:04.540 23.505 - 23.622: 99.3513% ( 1) 00:13:04.540 24.902 - 25.018: 99.3582% ( 1) 00:13:04.540 25.135 - 25.251: 99.3651% ( 1) 00:13:04.540 26.880 - 26.996: 99.3720% ( 1) 00:13:04.540 27.811 - 27.927: 99.3789% ( 1) 00:13:04.540 29.789 - 30.022: 99.3858% ( 1) 00:13:04.540 39.331 - 39.564: 99.3927% ( 1) 00:13:04.540 983.040 - 990.487: 99.4065% ( 2) 00:13:04.540 997.935 - 1005.382: 99.4134% ( 1) 00:13:04.540 1012.829 - 1020.276: 99.4479% ( 5) 00:13:04.540 1020.276 - 1027.724: 99.4617% ( 2) 00:13:04.540 1027.724 - 1035.171: 99.4824% ( 3) 00:13:04.540 1042.618 - 1050.065: 99.4893% ( 1) 00:13:04.540 1050.065 - 1057.513: 99.4962% ( 1) 00:13:04.540 1921.396 - 1936.291: 99.5031% ( 1) 00:13:04.540 1951.185 - 1966.080: 99.5169% ( 2) 00:13:04.540 1980.975 - 1995.869: 99.5238% ( 1) 00:13:04.540 1995.869 - 2010.764: 99.5307% ( 1) 00:13:04.540 2010.764 - 2025.658: 99.5652% ( 5) 00:13:04.540 2040.553 - 2055.447: 99.5721% ( 1) 00:13:04.540 2889.542 - 2904.436: 99.5790% ( 1) 00:13:04.540 2993.804 - 3008.698: 99.5929% ( 2) 00:13:04.540 3008.698 - 3023.593: 99.6067% ( 2) 00:13:04.540 3023.593 - 3038.487: 99.6343% ( 4) 00:13:04.540 3038.487 - 3053.382: 99.6619% ( 4) 00:13:04.540 3053.382 - 3068.276: 99.6688% ( 1) 00:13:04.540 3961.949 - 3991.738: 99.6964% ( 4) 00:13:04.540 3991.738 - 4021.527: 99.8275% ( 19) 00:13:04.540 4021.527 - 4051.316: 99.8689% ( 6) 00:13:04.540 4051.316 - 4081.105: 99.8758% ( 1) 00:13:04.540 4081.105 - 4110.895: 99.8896% ( 2) 00:13:04.540 4974.778 - 5004.567: 99.9034% ( 2) 00:13:04.540 5004.567 - 5034.356: 99.9586% ( 8) 00:13:04.541 5034.356 - 5064.145: 99.9655% ( 1) 00:13:04.541 5898.240 - 5928.029: 99.9724% ( 1) 00:13:04.541 5987.607 - 6017.396: 99.9793% ( 1) 00:13:04.541 6017.396 - 6047.185: 99.9931% ( 2) 00:13:04.541 6047.185 - 6076.975: 100.0000% ( 1) 00:13:04.541 00:13:04.541 18:08:02 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:04.541 18:08:02 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:04.541 18:08:02 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:04.541 18:08:02 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:04.541 
18:08:02 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.800 [ 00:13:04.800 { 00:13:04.800 "allow_any_host": true, 00:13:04.800 "hosts": [], 00:13:04.800 "listen_addresses": [], 00:13:04.800 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.800 "subtype": "Discovery" 00:13:04.800 }, 00:13:04.800 { 00:13:04.800 "allow_any_host": true, 00:13:04.800 "hosts": [], 00:13:04.800 "listen_addresses": [ 00:13:04.800 { 00:13:04.800 "adrfam": "IPv4", 00:13:04.800 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.800 "transport": "VFIOUSER", 00:13:04.800 "trsvcid": "0", 00:13:04.800 "trtype": "VFIOUSER" 00:13:04.800 } 00:13:04.800 ], 00:13:04.800 "max_cntlid": 65519, 00:13:04.800 "max_namespaces": 32, 00:13:04.800 "min_cntlid": 1, 00:13:04.800 "model_number": "SPDK bdev Controller", 00:13:04.800 "namespaces": [ 00:13:04.800 { 00:13:04.800 "bdev_name": "Malloc1", 00:13:04.800 "name": "Malloc1", 00:13:04.800 "nguid": "A112291EFD2B49CC99F9437755D235CC", 00:13:04.800 "nsid": 1, 00:13:04.800 "uuid": "a112291e-fd2b-49cc-99f9-437755d235cc" 00:13:04.800 }, 00:13:04.800 { 00:13:04.800 "bdev_name": "Malloc3", 00:13:04.800 "name": "Malloc3", 00:13:04.800 "nguid": "D0721B99ABF442AE96963461690F8CE0", 00:13:04.800 "nsid": 2, 00:13:04.800 "uuid": "d0721b99-abf4-42ae-9696-3461690f8ce0" 00:13:04.800 } 00:13:04.800 ], 00:13:04.800 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.800 "serial_number": "SPDK1", 00:13:04.800 "subtype": "NVMe" 00:13:04.800 }, 00:13:04.800 { 00:13:04.800 "allow_any_host": true, 00:13:04.800 "hosts": [], 00:13:04.800 "listen_addresses": [ 00:13:04.800 { 00:13:04.800 "adrfam": "IPv4", 00:13:04.800 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.800 "transport": "VFIOUSER", 00:13:04.800 "trsvcid": "0", 00:13:04.800 "trtype": "VFIOUSER" 00:13:04.800 } 00:13:04.800 ], 00:13:04.800 "max_cntlid": 65519, 00:13:04.800 "max_namespaces": 32, 00:13:04.800 "min_cntlid": 1, 00:13:04.800 "model_number": "SPDK bdev Controller", 00:13:04.800 "namespaces": [ 00:13:04.800 { 00:13:04.800 "bdev_name": "Malloc2", 00:13:04.800 "name": "Malloc2", 00:13:04.800 "nguid": "E3BB852B5332463390C5BA95B0C3208B", 00:13:04.800 "nsid": 1, 00:13:04.800 "uuid": "e3bb852b-5332-4633-90c5-ba95b0c3208b" 00:13:04.800 } 00:13:04.800 ], 00:13:04.800 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.800 "serial_number": "SPDK2", 00:13:04.800 "subtype": "NVMe" 00:13:04.800 } 00:13:04.800 ] 00:13:04.800 18:08:02 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:04.800 18:08:02 -- target/nvmf_vfio_user.sh@34 -- # aerpid=69901 00:13:04.800 18:08:02 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:04.800 18:08:02 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:04.800 18:08:02 -- common/autotest_common.sh@1244 -- # local i=0 00:13:04.800 18:08:02 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:04.800 18:08:02 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:13:04.800 18:08:02 -- common/autotest_common.sh@1247 -- # i=1 00:13:04.800 18:08:02 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:05.060 18:08:02 -- common/autotest_common.sh@1245 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:05.060 18:08:02 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:13:05.060 18:08:02 -- common/autotest_common.sh@1247 -- # i=2 00:13:05.060 18:08:02 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:05.060 18:08:02 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:05.060 18:08:02 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:05.060 18:08:02 -- common/autotest_common.sh@1255 -- # return 0 00:13:05.060 18:08:02 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:05.060 18:08:02 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:05.320 Malloc4 00:13:05.320 18:08:03 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:05.579 18:08:03 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:05.579 Asynchronous Event Request test 00:13:05.579 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:05.579 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:05.579 Registering asynchronous event callbacks... 00:13:05.579 Starting namespace attribute notice tests for all controllers... 00:13:05.579 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:05.579 aer_cb - Changed Namespace 00:13:05.579 Cleaning up... 00:13:05.837 [ 00:13:05.837 { 00:13:05.837 "allow_any_host": true, 00:13:05.837 "hosts": [], 00:13:05.837 "listen_addresses": [], 00:13:05.837 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:05.837 "subtype": "Discovery" 00:13:05.837 }, 00:13:05.837 { 00:13:05.837 "allow_any_host": true, 00:13:05.837 "hosts": [], 00:13:05.837 "listen_addresses": [ 00:13:05.837 { 00:13:05.837 "adrfam": "IPv4", 00:13:05.837 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:05.837 "transport": "VFIOUSER", 00:13:05.837 "trsvcid": "0", 00:13:05.837 "trtype": "VFIOUSER" 00:13:05.837 } 00:13:05.837 ], 00:13:05.837 "max_cntlid": 65519, 00:13:05.837 "max_namespaces": 32, 00:13:05.837 "min_cntlid": 1, 00:13:05.837 "model_number": "SPDK bdev Controller", 00:13:05.837 "namespaces": [ 00:13:05.837 { 00:13:05.837 "bdev_name": "Malloc1", 00:13:05.837 "name": "Malloc1", 00:13:05.837 "nguid": "A112291EFD2B49CC99F9437755D235CC", 00:13:05.837 "nsid": 1, 00:13:05.837 "uuid": "a112291e-fd2b-49cc-99f9-437755d235cc" 00:13:05.837 }, 00:13:05.837 { 00:13:05.837 "bdev_name": "Malloc3", 00:13:05.837 "name": "Malloc3", 00:13:05.837 "nguid": "D0721B99ABF442AE96963461690F8CE0", 00:13:05.837 "nsid": 2, 00:13:05.837 "uuid": "d0721b99-abf4-42ae-9696-3461690f8ce0" 00:13:05.837 } 00:13:05.837 ], 00:13:05.837 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:05.837 "serial_number": "SPDK1", 00:13:05.837 "subtype": "NVMe" 00:13:05.837 }, 00:13:05.837 { 00:13:05.837 "allow_any_host": true, 00:13:05.837 "hosts": [], 00:13:05.837 "listen_addresses": [ 00:13:05.837 { 00:13:05.837 "adrfam": "IPv4", 00:13:05.837 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:05.837 "transport": "VFIOUSER", 00:13:05.837 "trsvcid": "0", 00:13:05.837 "trtype": "VFIOUSER" 00:13:05.837 } 00:13:05.837 ], 00:13:05.837 "max_cntlid": 65519, 00:13:05.837 "max_namespaces": 32, 00:13:05.837 "min_cntlid": 1, 00:13:05.837 "model_number": "SPDK bdev Controller", 00:13:05.837 "namespaces": [ 00:13:05.837 { 00:13:05.837 "bdev_name": "Malloc2", 00:13:05.837 
"name": "Malloc2", 00:13:05.837 "nguid": "E3BB852B5332463390C5BA95B0C3208B", 00:13:05.837 "nsid": 1, 00:13:05.837 "uuid": "e3bb852b-5332-4633-90c5-ba95b0c3208b" 00:13:05.837 }, 00:13:05.837 { 00:13:05.837 "bdev_name": "Malloc4", 00:13:05.837 "name": "Malloc4", 00:13:05.837 "nguid": "8139C59F9D4742CAAAF876C6C907FEC5", 00:13:05.837 "nsid": 2, 00:13:05.837 "uuid": "8139c59f-9d47-42ca-aaf8-76c6c907fec5" 00:13:05.837 } 00:13:05.837 ], 00:13:05.837 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:05.837 "serial_number": "SPDK2", 00:13:05.837 "subtype": "NVMe" 00:13:05.837 } 00:13:05.837 ] 00:13:05.837 18:08:03 -- target/nvmf_vfio_user.sh@44 -- # wait 69901 00:13:05.837 18:08:03 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:05.838 18:08:03 -- target/nvmf_vfio_user.sh@95 -- # killprocess 69223 00:13:05.838 18:08:03 -- common/autotest_common.sh@926 -- # '[' -z 69223 ']' 00:13:05.838 18:08:03 -- common/autotest_common.sh@930 -- # kill -0 69223 00:13:05.838 18:08:03 -- common/autotest_common.sh@931 -- # uname 00:13:05.838 18:08:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:05.838 18:08:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69223 00:13:05.838 killing process with pid 69223 00:13:05.838 18:08:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:05.838 18:08:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:05.838 18:08:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69223' 00:13:05.838 18:08:03 -- common/autotest_common.sh@945 -- # kill 69223 00:13:05.838 [2024-04-25 18:08:03.769675] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:05.838 18:08:03 -- common/autotest_common.sh@950 -- # wait 69223 00:13:06.404 18:08:04 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:06.404 18:08:04 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=69950 00:13:06.405 Process pid: 69950 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 69950' 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:06.405 18:08:04 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 69950 00:13:06.405 18:08:04 -- common/autotest_common.sh@819 -- # '[' -z 69950 ']' 00:13:06.405 18:08:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.405 18:08:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:06.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.664 18:08:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:06.664 18:08:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:06.664 18:08:04 -- common/autotest_common.sh@10 -- # set +x 00:13:06.664 [2024-04-25 18:08:04.394444] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:06.664 [2024-04-25 18:08:04.395756] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:06.664 [2024-04-25 18:08:04.395841] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.664 [2024-04-25 18:08:04.529504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.923 [2024-04-25 18:08:04.674525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.923 [2024-04-25 18:08:04.674685] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.923 [2024-04-25 18:08:04.674698] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.923 [2024-04-25 18:08:04.674706] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.923 [2024-04-25 18:08:04.674887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.923 [2024-04-25 18:08:04.675981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.923 [2024-04-25 18:08:04.676157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.923 [2024-04-25 18:08:04.676166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.923 [2024-04-25 18:08:04.800451] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:06.923 [2024-04-25 18:08:04.813430] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:06.923 [2024-04-25 18:08:04.813634] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:06.923 [2024-04-25 18:08:04.814485] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:06.923 [2024-04-25 18:08:04.814649] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
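Interrupt mode is opted into purely on the command line above (--interrupt-mode on nvmf_tgt, plus -M -I handed to nvmf_create_transport later), and the thread.c notices confirm each poll-group thread starts in intr mode. A minimal sketch of the launch-and-teardown lifecycle the script drives, assuming the same SPDK build tree (the real waitforlisten/killprocess helpers in autotest_common.sh do more checking than shown here):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!
    trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    "$rpc" -s /var/tmp/spdk.sock framework_wait_init   # block until init completes (waitforlisten also retries until the socket exists)
    # ... configure and run the test ...
    kill "$nvmfpid" && wait "$nvmfpid"                 # roughly what killprocess boils down to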
00:13:07.490 18:08:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:07.490 18:08:05 -- common/autotest_common.sh@852 -- # return 0 00:13:07.490 18:08:05 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:08.864 18:08:06 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:08.864 18:08:06 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:08.864 18:08:06 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:08.864 18:08:06 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:08.864 18:08:06 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:08.864 18:08:06 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:09.123 Malloc1 00:13:09.123 18:08:06 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:09.382 18:08:07 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:09.641 18:08:07 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:09.900 18:08:07 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:09.900 18:08:07 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:09.900 18:08:07 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:10.468 Malloc2 00:13:10.468 18:08:08 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:10.468 18:08:08 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:10.727 18:08:08 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:10.986 18:08:08 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:10.986 18:08:08 -- target/nvmf_vfio_user.sh@95 -- # killprocess 69950 00:13:10.986 18:08:08 -- common/autotest_common.sh@926 -- # '[' -z 69950 ']' 00:13:10.986 18:08:08 -- common/autotest_common.sh@930 -- # kill -0 69950 00:13:10.986 18:08:08 -- common/autotest_common.sh@931 -- # uname 00:13:10.986 18:08:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:10.986 18:08:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69950 00:13:10.986 18:08:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:10.986 18:08:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:10.986 18:08:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69950' 00:13:10.986 killing process with pid 69950 00:13:10.986 18:08:08 -- common/autotest_common.sh@945 -- # kill 69950 00:13:10.986 18:08:08 -- common/autotest_common.sh@950 -- # wait 69950 00:13:11.553 18:08:09 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:11.553 18:08:09 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:11.553 ************************************ 00:13:11.553 END TEST nvmf_vfio_user 00:13:11.553 
************************************ 00:13:11.553 00:13:11.553 real 0m55.437s 00:13:11.554 user 3m37.127s 00:13:11.554 sys 0m3.989s 00:13:11.554 18:08:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.554 18:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:11.554 18:08:09 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:11.554 18:08:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:11.554 18:08:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.554 18:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:11.554 ************************************ 00:13:11.554 START TEST nvmf_vfio_user_nvme_compliance 00:13:11.554 ************************************ 00:13:11.554 18:08:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:11.554 * Looking for test storage... 00:13:11.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:13:11.554 18:08:09 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.554 18:08:09 -- nvmf/common.sh@7 -- # uname -s 00:13:11.554 18:08:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.554 18:08:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.554 18:08:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.554 18:08:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.554 18:08:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.554 18:08:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.554 18:08:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.554 18:08:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.554 18:08:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.554 18:08:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.554 18:08:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:11.554 18:08:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:11.554 18:08:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.554 18:08:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.554 18:08:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.554 18:08:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.554 18:08:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.554 18:08:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.554 18:08:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.554 18:08:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.554 18:08:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.554 18:08:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.554 18:08:09 -- paths/export.sh@5 -- # export PATH 00:13:11.554 18:08:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.554 18:08:09 -- nvmf/common.sh@46 -- # : 0 00:13:11.554 18:08:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:11.554 18:08:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:11.554 18:08:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:11.554 18:08:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.554 18:08:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.554 18:08:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:11.554 18:08:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:11.554 18:08:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:11.554 18:08:09 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.554 18:08:09 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:11.554 18:08:09 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:11.554 18:08:09 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:11.554 18:08:09 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:11.554 Process pid: 70140 00:13:11.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:11.554 18:08:09 -- compliance/compliance.sh@20 -- # nvmfpid=70140 00:13:11.554 18:08:09 -- compliance/compliance.sh@21 -- # echo 'Process pid: 70140' 00:13:11.554 18:08:09 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:11.554 18:08:09 -- compliance/compliance.sh@24 -- # waitforlisten 70140 00:13:11.554 18:08:09 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:11.554 18:08:09 -- common/autotest_common.sh@819 -- # '[' -z 70140 ']' 00:13:11.554 18:08:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.554 18:08:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:11.554 18:08:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.554 18:08:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:11.554 18:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:11.554 [2024-04-25 18:08:09.440137] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:11.554 [2024-04-25 18:08:09.441192] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.813 [2024-04-25 18:08:09.576100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.813 [2024-04-25 18:08:09.669783] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:11.813 [2024-04-25 18:08:09.670191] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.813 [2024-04-25 18:08:09.670242] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.813 [2024-04-25 18:08:09.670396] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
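The compliance target just started (pid 70140, three cores) is configured entirely over JSON-RPC in the rpc_cmd calls that follow: one VFIOUSER transport, a 64 MiB malloc bdev, and a single subsystem exposed through a vfio-user socket directory, after which the compliance binary connects. A standalone sketch of that same sequence, with values copied from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'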
00:13:11.813 [2024-04-25 18:08:09.670819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.813 [2024-04-25 18:08:09.670934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.813 [2024-04-25 18:08:09.670938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.749 18:08:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:12.749 18:08:10 -- common/autotest_common.sh@852 -- # return 0 00:13:12.749 18:08:10 -- compliance/compliance.sh@26 -- # sleep 1 00:13:13.686 18:08:11 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:13.686 18:08:11 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:13.686 18:08:11 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:13.686 18:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.686 18:08:11 -- common/autotest_common.sh@10 -- # set +x 00:13:13.686 18:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.686 18:08:11 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:13.686 18:08:11 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:13.686 18:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.686 18:08:11 -- common/autotest_common.sh@10 -- # set +x 00:13:13.686 malloc0 00:13:13.686 18:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.686 18:08:11 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:13.686 18:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.686 18:08:11 -- common/autotest_common.sh@10 -- # set +x 00:13:13.686 18:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.686 18:08:11 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:13.686 18:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.686 18:08:11 -- common/autotest_common.sh@10 -- # set +x 00:13:13.686 18:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.686 18:08:11 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:13.686 18:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.686 18:08:11 -- common/autotest_common.sh@10 -- # set +x 00:13:13.686 18:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.686 18:08:11 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:13.945 00:13:13.945 00:13:13.945 CUnit - A unit testing framework for C - Version 2.1-3 00:13:13.945 http://cunit.sourceforge.net/ 00:13:13.945 00:13:13.945 00:13:13.945 Suite: nvme_compliance 00:13:13.945 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-25 18:08:11.802671] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:13.945 [2024-04-25 18:08:11.802753] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:13.945 [2024-04-25 18:08:11.802765] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:13.945 passed 00:13:14.204 Test: admin_identify_ctrlr_verify_fused ...passed 00:13:14.204 Test: admin_identify_ns ...[2024-04-25 18:08:12.058438] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for 
invalid NSID 0 00:13:14.204 [2024-04-25 18:08:12.066443] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:14.204 passed 00:13:14.463 Test: admin_get_features_mandatory_features ...passed 00:13:14.463 Test: admin_get_features_optional_features ...passed 00:13:14.721 Test: admin_set_features_number_of_queues ...passed 00:13:14.721 Test: admin_get_log_page_mandatory_logs ...passed 00:13:14.978 Test: admin_get_log_page_with_lpo ...[2024-04-25 18:08:12.690310] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:14.978 passed 00:13:14.978 Test: fabric_property_get ...passed 00:13:14.978 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-25 18:08:12.901742] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:15.236 passed 00:13:15.236 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-25 18:08:13.082302] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:15.236 [2024-04-25 18:08:13.098292] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:15.236 passed 00:13:15.494 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-25 18:08:13.202673] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:15.494 passed 00:13:15.494 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-25 18:08:13.374299] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:15.494 [2024-04-25 18:08:13.398297] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:15.750 passed 00:13:15.750 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-25 18:08:13.493681] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:15.750 [2024-04-25 18:08:13.493785] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:15.750 passed 00:13:15.750 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-25 18:08:13.682320] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:16.006 [2024-04-25 18:08:13.690294] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:16.006 [2024-04-25 18:08:13.698297] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:16.006 [2024-04-25 18:08:13.706289] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:16.006 passed 00:13:16.006 Test: admin_create_io_sq_verify_pc ...[2024-04-25 18:08:13.846312] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:16.006 passed 00:13:17.376 Test: admin_create_io_qp_max_qps ...[2024-04-25 18:08:15.066295] nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:17.633 passed 00:13:17.891 Test: admin_create_io_sq_shared_cq ...[2024-04-25 18:08:15.673326] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:17.891 passed 00:13:17.891 00:13:17.891 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.891 suites 1 1 n/a 0 0 00:13:17.891 tests 18 18 18 0 0 00:13:17.891 asserts 360 360 360 0 n/a 00:13:17.891 00:13:17.891 Elapsed time = 1.627 seconds 00:13:17.891 18:08:15 -- compliance/compliance.sh@42 -- # killprocess 70140 00:13:17.891 18:08:15 -- 
common/autotest_common.sh@926 -- # '[' -z 70140 ']' 00:13:17.891 18:08:15 -- common/autotest_common.sh@930 -- # kill -0 70140 00:13:17.891 18:08:15 -- common/autotest_common.sh@931 -- # uname 00:13:17.891 18:08:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:17.891 18:08:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70140 00:13:17.891 killing process with pid 70140 00:13:17.891 18:08:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:17.891 18:08:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:17.891 18:08:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70140' 00:13:17.891 18:08:15 -- common/autotest_common.sh@945 -- # kill 70140 00:13:17.891 18:08:15 -- common/autotest_common.sh@950 -- # wait 70140 00:13:18.149 18:08:16 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:18.149 18:08:16 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:18.149 00:13:18.149 real 0m6.775s 00:13:18.149 user 0m19.075s 00:13:18.149 sys 0m0.569s 00:13:18.149 18:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.149 ************************************ 00:13:18.149 END TEST nvmf_vfio_user_nvme_compliance 00:13:18.149 ************************************ 00:13:18.149 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:13:18.408 18:08:16 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:18.408 18:08:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:18.408 18:08:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:18.408 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:13:18.408 ************************************ 00:13:18.408 START TEST nvmf_vfio_user_fuzz 00:13:18.408 ************************************ 00:13:18.408 18:08:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:18.408 * Looking for test storage... 
00:13:18.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:18.408 18:08:16 -- nvmf/common.sh@7 -- # uname -s 00:13:18.408 18:08:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.408 18:08:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.408 18:08:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.408 18:08:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.408 18:08:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.408 18:08:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.408 18:08:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.408 18:08:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.408 18:08:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.408 18:08:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.408 18:08:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:18.408 18:08:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:18.408 18:08:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.408 18:08:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.408 18:08:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:18.408 18:08:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.408 18:08:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.408 18:08:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.408 18:08:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.408 18:08:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.408 18:08:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.408 18:08:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.408 18:08:16 -- 
paths/export.sh@5 -- # export PATH 00:13:18.408 18:08:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.408 18:08:16 -- nvmf/common.sh@46 -- # : 0 00:13:18.408 18:08:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:18.408 18:08:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:18.408 18:08:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:18.408 18:08:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.408 18:08:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.408 18:08:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:18.408 18:08:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:18.408 18:08:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:18.408 Process pid: 70292 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=70292 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 70292' 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 70292 00:13:18.408 18:08:16 -- common/autotest_common.sh@819 -- # '[' -z 70292 ']' 00:13:18.408 18:08:16 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:18.408 18:08:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.408 18:08:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:18.408 18:08:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
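The fuzz target below is prepared with the same transport/subsystem/listener RPCs as the compliance test above; what is specific to this test is the fuzzer invocation itself, a 30-second run with a fixed seed against the vfio-user target. Roughly, as copied from the trace that follows (the remaining switches are passed through verbatim from the test script):

    /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz \
        -t 30 -S 123456 -N -a \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'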
00:13:18.408 18:08:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:18.408 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:13:19.345 18:08:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:19.345 18:08:17 -- common/autotest_common.sh@852 -- # return 0 00:13:19.345 18:08:17 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:20.283 18:08:18 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:20.283 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.283 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.283 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.283 18:08:18 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:20.283 18:08:18 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:20.283 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.283 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.283 malloc0 00:13:20.283 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.283 18:08:18 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:20.283 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.283 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.283 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.283 18:08:18 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:20.283 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.283 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.283 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.283 18:08:18 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:20.283 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.283 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.541 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.541 18:08:18 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:20.541 18:08:18 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:20.800 Shutting down the fuzz application 00:13:20.800 18:08:18 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:20.800 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.800 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.800 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.800 18:08:18 -- target/vfio_user_fuzz.sh@46 -- # killprocess 70292 00:13:20.800 18:08:18 -- common/autotest_common.sh@926 -- # '[' -z 70292 ']' 00:13:20.800 18:08:18 -- common/autotest_common.sh@930 -- # kill -0 70292 00:13:20.800 18:08:18 -- common/autotest_common.sh@931 -- # uname 00:13:20.800 18:08:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:20.800 18:08:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70292 00:13:20.800 killing process with pid 70292 00:13:20.800 18:08:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:20.800 18:08:18 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:20.800 18:08:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70292' 00:13:20.800 18:08:18 -- common/autotest_common.sh@945 -- # kill 70292 00:13:20.800 18:08:18 -- common/autotest_common.sh@950 -- # wait 70292 00:13:21.059 18:08:18 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:21.060 18:08:18 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:21.060 00:13:21.060 real 0m2.837s 00:13:21.060 user 0m3.099s 00:13:21.060 sys 0m0.377s 00:13:21.060 ************************************ 00:13:21.060 END TEST nvmf_vfio_user_fuzz 00:13:21.060 ************************************ 00:13:21.060 18:08:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.060 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:21.060 18:08:18 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:21.060 18:08:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:21.060 18:08:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:21.060 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:21.060 ************************************ 00:13:21.060 START TEST nvmf_host_management 00:13:21.060 ************************************ 00:13:21.060 18:08:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:21.319 * Looking for test storage... 00:13:21.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:21.319 18:08:19 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:21.319 18:08:19 -- nvmf/common.sh@7 -- # uname -s 00:13:21.319 18:08:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.319 18:08:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.319 18:08:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.319 18:08:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.319 18:08:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.319 18:08:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.319 18:08:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.319 18:08:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.319 18:08:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.319 18:08:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.319 18:08:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:21.319 18:08:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:21.319 18:08:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.319 18:08:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.319 18:08:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:21.319 18:08:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.319 18:08:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.319 18:08:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.319 18:08:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.319 18:08:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.319 18:08:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.319 18:08:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.319 18:08:19 -- paths/export.sh@5 -- # export PATH 00:13:21.319 18:08:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.319 18:08:19 -- nvmf/common.sh@46 -- # : 0 00:13:21.319 18:08:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:21.319 18:08:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:21.319 18:08:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:21.319 18:08:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.319 18:08:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.319 18:08:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:21.319 18:08:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:21.319 18:08:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:21.319 18:08:19 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:21.319 18:08:19 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:21.319 18:08:19 -- target/host_management.sh@104 -- # nvmftestinit 00:13:21.319 18:08:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:21.319 18:08:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.319 18:08:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:21.319 18:08:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:21.319 18:08:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:21.319 18:08:19 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.319 18:08:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.319 18:08:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.319 18:08:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:21.319 18:08:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:21.319 18:08:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:21.319 18:08:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:21.319 18:08:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:21.319 18:08:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:21.319 18:08:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.319 18:08:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.319 18:08:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:21.319 18:08:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:21.319 18:08:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:21.319 18:08:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:21.319 18:08:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:21.319 18:08:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.319 18:08:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:21.319 18:08:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:21.319 18:08:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:21.319 18:08:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:21.319 18:08:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:21.320 18:08:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:21.320 Cannot find device "nvmf_tgt_br" 00:13:21.320 18:08:19 -- nvmf/common.sh@154 -- # true 00:13:21.320 18:08:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:21.320 Cannot find device "nvmf_tgt_br2" 00:13:21.320 18:08:19 -- nvmf/common.sh@155 -- # true 00:13:21.320 18:08:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:21.320 18:08:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:21.320 Cannot find device "nvmf_tgt_br" 00:13:21.320 18:08:19 -- nvmf/common.sh@157 -- # true 00:13:21.320 18:08:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:21.320 Cannot find device "nvmf_tgt_br2" 00:13:21.320 18:08:19 -- nvmf/common.sh@158 -- # true 00:13:21.320 18:08:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:21.320 18:08:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:21.320 18:08:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:21.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.320 18:08:19 -- nvmf/common.sh@161 -- # true 00:13:21.320 18:08:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:21.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.320 18:08:19 -- nvmf/common.sh@162 -- # true 00:13:21.320 18:08:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:21.320 18:08:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:21.320 18:08:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:21.320 18:08:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:13:21.320 18:08:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:21.320 18:08:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:21.579 18:08:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:21.579 18:08:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:21.579 18:08:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:21.579 18:08:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:21.579 18:08:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:21.579 18:08:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:21.579 18:08:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:21.579 18:08:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:21.579 18:08:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:21.579 18:08:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:21.579 18:08:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:21.579 18:08:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:21.579 18:08:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:21.579 18:08:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:21.579 18:08:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:21.579 18:08:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:21.579 18:08:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:21.579 18:08:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:21.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:13:21.579 00:13:21.579 --- 10.0.0.2 ping statistics --- 00:13:21.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.579 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:21.579 18:08:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:21.579 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:21.579 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:13:21.579 00:13:21.579 --- 10.0.0.3 ping statistics --- 00:13:21.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.579 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:21.579 18:08:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:21.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:21.579 00:13:21.579 --- 10.0.0.1 ping statistics --- 00:13:21.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.579 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:21.579 18:08:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.579 18:08:19 -- nvmf/common.sh@421 -- # return 0 00:13:21.579 18:08:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:21.579 18:08:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.579 18:08:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:21.579 18:08:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:21.579 18:08:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.579 18:08:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:21.579 18:08:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:21.579 18:08:19 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:21.579 18:08:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:21.579 18:08:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:21.579 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:13:21.579 ************************************ 00:13:21.579 START TEST nvmf_host_management 00:13:21.580 ************************************ 00:13:21.580 18:08:19 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:21.580 18:08:19 -- target/host_management.sh@69 -- # starttarget 00:13:21.580 18:08:19 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:21.580 18:08:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:21.580 18:08:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:21.580 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:13:21.580 18:08:19 -- nvmf/common.sh@469 -- # nvmfpid=70526 00:13:21.580 18:08:19 -- nvmf/common.sh@470 -- # waitforlisten 70526 00:13:21.580 18:08:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:21.580 18:08:19 -- common/autotest_common.sh@819 -- # '[' -z 70526 ']' 00:13:21.580 18:08:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.580 18:08:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:21.580 18:08:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.580 18:08:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:21.580 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:13:21.580 [2024-04-25 18:08:19.498039] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:21.580 [2024-04-25 18:08:19.498125] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.838 [2024-04-25 18:08:19.631915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.839 [2024-04-25 18:08:19.762071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:21.839 [2024-04-25 18:08:19.762637] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
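Because NET_TYPE=virt, nvmf_veth_init builds the whole test network in software: the target runs inside the nvmf_tgt_ns_spdk namespace, the initiator stays in the root namespace, and two veth pairs hang off a bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the target side. Condensed from the trace above (the link-up commands and the second target interface are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2        # initiator -> target, as verified above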
00:13:21.839 [2024-04-25 18:08:19.762675] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.839 [2024-04-25 18:08:19.762692] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.839 [2024-04-25 18:08:19.762970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.839 [2024-04-25 18:08:19.763118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:21.839 [2024-04-25 18:08:19.763140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.839 [2024-04-25 18:08:19.762817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.774 18:08:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:22.774 18:08:20 -- common/autotest_common.sh@852 -- # return 0 00:13:22.774 18:08:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:22.774 18:08:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:22.774 18:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:22.774 18:08:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.774 18:08:20 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.774 18:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.774 18:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:22.774 [2024-04-25 18:08:20.461086] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.774 18:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.775 18:08:20 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:22.775 18:08:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:22.775 18:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:22.775 18:08:20 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:22.775 18:08:20 -- target/host_management.sh@23 -- # cat 00:13:22.775 18:08:20 -- target/host_management.sh@30 -- # rpc_cmd 00:13:22.775 18:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.775 18:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:22.775 Malloc0 00:13:22.775 [2024-04-25 18:08:20.554225] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.775 18:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.775 18:08:20 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:22.775 18:08:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:22.775 18:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:22.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
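For host_management the target (pid 70526, cores 1-4) is configured by batching RPCs through a generated rpcs.txt (the rm/cat/rpc_cmd sequence above); the trace only surfaces the transport call and the listener notice, so the following is an approximation of the equivalent direct RPCs, with the subsystem NQN and address taken from the bdevperf config shown below and the serial number and subsystem flags treated as assumptions:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # -o and -u 8192 copied verbatim from the trace above
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420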
00:13:22.775 18:08:20 -- target/host_management.sh@73 -- # perfpid=70598 00:13:22.775 18:08:20 -- target/host_management.sh@74 -- # waitforlisten 70598 /var/tmp/bdevperf.sock 00:13:22.775 18:08:20 -- common/autotest_common.sh@819 -- # '[' -z 70598 ']' 00:13:22.775 18:08:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.775 18:08:20 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:22.775 18:08:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:22.775 18:08:20 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:22.775 18:08:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.775 18:08:20 -- nvmf/common.sh@520 -- # config=() 00:13:22.775 18:08:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:22.775 18:08:20 -- nvmf/common.sh@520 -- # local subsystem config 00:13:22.775 18:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:22.775 18:08:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:22.775 18:08:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:22.775 { 00:13:22.775 "params": { 00:13:22.775 "name": "Nvme$subsystem", 00:13:22.775 "trtype": "$TEST_TRANSPORT", 00:13:22.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.775 "adrfam": "ipv4", 00:13:22.775 "trsvcid": "$NVMF_PORT", 00:13:22.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.775 "hdgst": ${hdgst:-false}, 00:13:22.775 "ddgst": ${ddgst:-false} 00:13:22.775 }, 00:13:22.775 "method": "bdev_nvme_attach_controller" 00:13:22.775 } 00:13:22.775 EOF 00:13:22.775 )") 00:13:22.775 18:08:20 -- nvmf/common.sh@542 -- # cat 00:13:22.775 18:08:20 -- nvmf/common.sh@544 -- # jq . 00:13:22.775 18:08:20 -- nvmf/common.sh@545 -- # IFS=, 00:13:22.775 18:08:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:22.775 "params": { 00:13:22.775 "name": "Nvme0", 00:13:22.775 "trtype": "tcp", 00:13:22.775 "traddr": "10.0.0.2", 00:13:22.775 "adrfam": "ipv4", 00:13:22.775 "trsvcid": "4420", 00:13:22.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:22.775 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:22.775 "hdgst": false, 00:13:22.775 "ddgst": false 00:13:22.775 }, 00:13:22.775 "method": "bdev_nvme_attach_controller" 00:13:22.775 }' 00:13:22.775 [2024-04-25 18:08:20.657449] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:22.775 [2024-04-25 18:08:20.657548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70598 ] 00:13:23.055 [2024-04-25 18:08:20.800183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.055 [2024-04-25 18:08:20.914113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.352 Running I/O for 10 seconds... 
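bdevperf is not pointed at the target with a URI here: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed above (controller Nvme0, traddr 10.0.0.2, trsvcid 4420) and the script hands it over as a JSON config on an anonymous file descriptor, which is what the literal --json /dev/fd/63 in the command line is. A sketch of the same invocation, assuming the helper is sourced from test/nvmf/common.sh as in this run:

    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10

The waitforio loop that follows then polls bdev_get_iostat -b Nvme0n1 on the bdevperf RPC socket, filtering the result with jq -r '.bdevs[0].num_read_ops', until at least 100 reads have completed.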
00:13:23.923 18:08:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:23.923 18:08:21 -- common/autotest_common.sh@852 -- # return 0 00:13:23.923 18:08:21 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:23.923 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.923 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:23.923 18:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.923 18:08:21 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:23.923 18:08:21 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:23.923 18:08:21 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:23.923 18:08:21 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:23.923 18:08:21 -- target/host_management.sh@52 -- # local ret=1 00:13:23.923 18:08:21 -- target/host_management.sh@53 -- # local i 00:13:23.923 18:08:21 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:23.923 18:08:21 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:23.923 18:08:21 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:23.923 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.923 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:23.923 18:08:21 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:23.923 18:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.923 18:08:21 -- target/host_management.sh@55 -- # read_io_count=2151 00:13:23.923 18:08:21 -- target/host_management.sh@58 -- # '[' 2151 -ge 100 ']' 00:13:23.923 18:08:21 -- target/host_management.sh@59 -- # ret=0 00:13:23.923 18:08:21 -- target/host_management.sh@60 -- # break 00:13:23.923 18:08:21 -- target/host_management.sh@64 -- # return 0 00:13:23.923 18:08:21 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:23.923 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.923 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:23.923 [2024-04-25 18:08:21.788721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to 
be set 00:13:23.923 [2024-04-25 18:08:21.788887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.788999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.789007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.923 [2024-04-25 18:08:21.789015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d0b0 is same with the state(5) to be set 00:13:23.924 [2024-04-25 18:08:21.789714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.789987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.789997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.924 [2024-04-25 18:08:21.790517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.924 [2024-04-25 18:08:21.790526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.790980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.790989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:23.925 [2024-04-25 18:08:21.791209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.925 [2024-04-25 18:08:21.791312] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ce7d0 was disconnected and freed. reset controller. 
00:13:23.925 [2024-04-25 18:08:21.792513] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:23.925 18:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:23.925 18:08:21 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:23.925 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:23.925 18:08:21 -- common/autotest_common.sh@10 -- # set +x
00:13:23.925 task offset: 38400 on job bdev=Nvme0n1 fails
00:13:23.925
00:13:23.925 Latency(us)
00:13:23.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:23.925 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:23.925 Job: Nvme0n1 ended in about 0.70 seconds with error
00:13:23.925 Verification LBA range: start 0x0 length 0x400
00:13:23.925 Nvme0n1 : 0.70 3340.86 208.80 91.96 0.00 18344.58 2010.76 25976.09
00:13:23.925 ===================================================================================================================
00:13:23.925 Total : 3340.86 208.80 91.96 0.00 18344.58 2010.76 25976.09
00:13:23.925 [2024-04-25 18:08:21.794896] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:23.925 [2024-04-25 18:08:21.794928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce170 (9): Bad file descriptor
00:13:23.925 18:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:23.925 18:08:21 -- target/host_management.sh@87 -- # sleep 1
00:13:25.310 [2024-04-25 18:08:21.810130] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:13:25.310 18:08:22 -- target/host_management.sh@91 -- # kill -9 70598
00:13:25.310 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (70598) - No such process
00:13:25.310 18:08:22 -- target/host_management.sh@91 -- # true
00:13:25.310 18:08:22 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:13:25.310 18:08:22 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:13:25.310 18:08:22 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:13:25.310 18:08:22 -- nvmf/common.sh@520 -- # config=()
00:13:25.310 18:08:22 -- nvmf/common.sh@520 -- # local subsystem config
00:13:25.310 18:08:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:13:25.310 18:08:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:13:25.310 {
00:13:25.310 "params": {
00:13:25.310 "name": "Nvme$subsystem",
00:13:25.310 "trtype": "$TEST_TRANSPORT",
00:13:25.310 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:25.310 "adrfam": "ipv4",
00:13:25.310 "trsvcid": "$NVMF_PORT",
00:13:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:25.310 "hdgst": ${hdgst:-false},
00:13:25.310 "ddgst": ${ddgst:-false}
00:13:25.310 },
00:13:25.310 "method": "bdev_nvme_attach_controller"
00:13:25.310 }
00:13:25.310 EOF
00:13:25.310 )")
00:13:25.310 18:08:22 -- nvmf/common.sh@542 -- # cat
00:13:25.310 18:08:22 -- nvmf/common.sh@544 -- # jq .
00:13:25.310 18:08:22 -- nvmf/common.sh@545 -- # IFS=,
00:13:25.310 18:08:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:13:25.310 "params": {
00:13:25.310 "name": "Nvme0",
00:13:25.310 "trtype": "tcp",
00:13:25.310 "traddr": "10.0.0.2",
00:13:25.310 "adrfam": "ipv4",
00:13:25.310 "trsvcid": "4420",
00:13:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:25.310 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:25.310 "hdgst": false,
00:13:25.310 "ddgst": false
00:13:25.310 },
00:13:25.310 "method": "bdev_nvme_attach_controller"
00:13:25.310 }'
00:13:25.310 [2024-04-25 18:08:22.855294] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:13:25.310 [2024-04-25 18:08:22.855402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70648 ]
00:13:25.310 [2024-04-25 18:08:22.987140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:25.310 [2024-04-25 18:08:23.106619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:25.567 Running I/O for 1 seconds...
00:13:26.504
00:13:26.504 Latency(us)
00:13:26.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:26.504 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:26.504 Verification LBA range: start 0x0 length 0x400
00:13:26.504 Nvme0n1 : 1.01 3267.72 204.23 0.00 0.00 19242.45 1310.72 26214.40
00:13:26.504 ===================================================================================================================
00:13:26.504 Total : 3267.72 204.23 0.00 0.00 19242.45 1310.72 26214.40
00:13:26.763 18:08:24 -- target/host_management.sh@101 -- # stoptarget
00:13:26.763 18:08:24 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:26.763 18:08:24 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:13:26.763 18:08:24 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:13:26.763 18:08:24 -- target/host_management.sh@40 -- # nvmftestfini
00:13:26.763 18:08:24 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:26.763 18:08:24 -- nvmf/common.sh@116 -- # sync
00:13:26.763 18:08:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:26.763 18:08:24 -- nvmf/common.sh@119 -- # set +e
00:13:26.763 18:08:24 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:26.763 18:08:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:26.763 rmmod nvme_tcp
00:13:26.763 rmmod nvme_fabrics
00:13:26.763 rmmod nvme_keyring
00:13:26.763 18:08:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:26.763 18:08:24 -- nvmf/common.sh@123 -- # set -e
00:13:26.763 18:08:24 -- nvmf/common.sh@124 -- # return 0
00:13:26.763 18:08:24 -- nvmf/common.sh@477 -- # '[' -n 70526 ']'
00:13:26.763 18:08:24 -- nvmf/common.sh@478 -- # killprocess 70526
00:13:26.763 18:08:24 -- common/autotest_common.sh@926 -- # '[' -z 70526 ']'
00:13:26.763 18:08:24 -- common/autotest_common.sh@930 -- # kill -0 70526
00:13:26.763 18:08:24 -- common/autotest_common.sh@931 -- # uname
00:13:26.763 18:08:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:26.763 18:08:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70526
00:13:26.763 18:08:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:13:26.763 18:08:24 --
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:26.763 killing process with pid 70526 00:13:26.763 18:08:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70526' 00:13:26.763 18:08:24 -- common/autotest_common.sh@945 -- # kill 70526 00:13:26.763 18:08:24 -- common/autotest_common.sh@950 -- # wait 70526 00:13:27.331 [2024-04-25 18:08:25.066833] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:27.331 18:08:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:27.331 18:08:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:27.331 18:08:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:27.331 18:08:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.331 18:08:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:27.331 18:08:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.331 18:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.331 18:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.331 18:08:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:27.331 00:13:27.331 real 0m5.711s 00:13:27.331 user 0m23.583s 00:13:27.331 sys 0m1.342s 00:13:27.331 18:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.331 18:08:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.331 ************************************ 00:13:27.331 END TEST nvmf_host_management 00:13:27.331 ************************************ 00:13:27.331 18:08:25 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:27.331 00:13:27.331 real 0m6.204s 00:13:27.331 user 0m23.700s 00:13:27.331 sys 0m1.578s 00:13:27.331 18:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.331 18:08:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.331 ************************************ 00:13:27.331 END TEST nvmf_host_management 00:13:27.331 ************************************ 00:13:27.331 18:08:25 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:27.331 18:08:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:27.331 18:08:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:27.331 18:08:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.331 ************************************ 00:13:27.331 START TEST nvmf_lvol 00:13:27.331 ************************************ 00:13:27.331 18:08:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:27.590 * Looking for test storage... 
00:13:27.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.590 18:08:25 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.590 18:08:25 -- nvmf/common.sh@7 -- # uname -s 00:13:27.590 18:08:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.590 18:08:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.590 18:08:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.590 18:08:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.590 18:08:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.591 18:08:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.591 18:08:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.591 18:08:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.591 18:08:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.591 18:08:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.591 18:08:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:27.591 18:08:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:27.591 18:08:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.591 18:08:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.591 18:08:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.591 18:08:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.591 18:08:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.591 18:08:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.591 18:08:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.591 18:08:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.591 18:08:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.591 18:08:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.591 18:08:25 -- 
paths/export.sh@5 -- # export PATH 00:13:27.591 18:08:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.591 18:08:25 -- nvmf/common.sh@46 -- # : 0 00:13:27.591 18:08:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:27.591 18:08:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:27.591 18:08:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:27.591 18:08:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.591 18:08:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.591 18:08:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:27.591 18:08:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:27.591 18:08:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:27.591 18:08:25 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.591 18:08:25 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.591 18:08:25 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:27.591 18:08:25 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:27.591 18:08:25 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:27.591 18:08:25 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:27.591 18:08:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:27.591 18:08:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.591 18:08:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:27.591 18:08:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:27.591 18:08:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:27.591 18:08:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.591 18:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.591 18:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.591 18:08:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:27.591 18:08:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:27.591 18:08:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:27.591 18:08:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:27.591 18:08:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:27.591 18:08:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:27.591 18:08:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.591 18:08:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.591 18:08:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:27.591 18:08:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:27.591 18:08:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.591 18:08:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.591 18:08:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.591 18:08:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.591 18:08:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.591 18:08:25 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.591 18:08:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.591 18:08:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.591 18:08:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:27.591 18:08:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:27.591 Cannot find device "nvmf_tgt_br" 00:13:27.591 18:08:25 -- nvmf/common.sh@154 -- # true 00:13:27.591 18:08:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.591 Cannot find device "nvmf_tgt_br2" 00:13:27.591 18:08:25 -- nvmf/common.sh@155 -- # true 00:13:27.591 18:08:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:27.591 18:08:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:27.591 Cannot find device "nvmf_tgt_br" 00:13:27.591 18:08:25 -- nvmf/common.sh@157 -- # true 00:13:27.591 18:08:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:27.591 Cannot find device "nvmf_tgt_br2" 00:13:27.591 18:08:25 -- nvmf/common.sh@158 -- # true 00:13:27.591 18:08:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:27.591 18:08:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:27.591 18:08:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.591 18:08:25 -- nvmf/common.sh@161 -- # true 00:13:27.591 18:08:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.591 18:08:25 -- nvmf/common.sh@162 -- # true 00:13:27.591 18:08:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.591 18:08:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.591 18:08:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.591 18:08:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.591 18:08:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.591 18:08:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.850 18:08:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.850 18:08:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:27.850 18:08:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:27.850 18:08:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:27.850 18:08:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:27.850 18:08:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:27.850 18:08:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:27.850 18:08:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.850 18:08:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.850 18:08:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.850 18:08:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:27.850 18:08:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:27.850 18:08:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.850 18:08:25 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.850 18:08:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.850 18:08:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.850 18:08:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.850 18:08:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:27.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:13:27.850 00:13:27.850 --- 10.0.0.2 ping statistics --- 00:13:27.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.850 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:27.850 18:08:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:27.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:13:27.850 00:13:27.850 --- 10.0.0.3 ping statistics --- 00:13:27.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.850 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:27.850 18:08:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:13:27.850 00:13:27.850 --- 10.0.0.1 ping statistics --- 00:13:27.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.850 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:27.850 18:08:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.850 18:08:25 -- nvmf/common.sh@421 -- # return 0 00:13:27.850 18:08:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:27.850 18:08:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.850 18:08:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:27.850 18:08:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:27.850 18:08:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.850 18:08:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:27.850 18:08:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:27.850 18:08:25 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:27.850 18:08:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:27.850 18:08:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:27.850 18:08:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.850 18:08:25 -- nvmf/common.sh@469 -- # nvmfpid=70883 00:13:27.850 18:08:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:27.850 18:08:25 -- nvmf/common.sh@470 -- # waitforlisten 70883 00:13:27.850 18:08:25 -- common/autotest_common.sh@819 -- # '[' -z 70883 ']' 00:13:27.850 18:08:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.850 18:08:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:27.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.850 18:08:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:27.851 18:08:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:27.851 18:08:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.851 [2024-04-25 18:08:25.725935] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:27.851 [2024-04-25 18:08:25.726008] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.110 [2024-04-25 18:08:25.861448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.110 [2024-04-25 18:08:25.964399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.110 [2024-04-25 18:08:25.964566] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.110 [2024-04-25 18:08:25.964583] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.110 [2024-04-25 18:08:25.964599] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.110 [2024-04-25 18:08:25.964778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.110 [2024-04-25 18:08:25.965445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.110 [2024-04-25 18:08:25.965452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.047 18:08:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:29.047 18:08:26 -- common/autotest_common.sh@852 -- # return 0 00:13:29.047 18:08:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:29.047 18:08:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:29.047 18:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.047 18:08:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.047 18:08:26 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.306 [2024-04-25 18:08:27.022430] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.306 18:08:27 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.564 18:08:27 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:29.564 18:08:27 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.823 18:08:27 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:29.823 18:08:27 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:30.082 18:08:27 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:30.342 18:08:28 -- target/nvmf_lvol.sh@29 -- # lvs=601fb497-0e7d-4598-9395-ca78ee0fd60f 00:13:30.342 18:08:28 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 601fb497-0e7d-4598-9395-ca78ee0fd60f lvol 20 00:13:30.601 18:08:28 -- target/nvmf_lvol.sh@32 -- # lvol=42a7e32e-d298-42e2-ac14-422a2b6c6d35 00:13:30.601 18:08:28 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:30.860 18:08:28 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 42a7e32e-d298-42e2-ac14-422a2b6c6d35 00:13:31.118 18:08:28 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:31.377 [2024-04-25 18:08:29.117245] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.377 18:08:29 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:31.635 18:08:29 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:31.635 18:08:29 -- target/nvmf_lvol.sh@42 -- # perf_pid=71031 00:13:31.635 18:08:29 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:32.597 18:08:30 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 42a7e32e-d298-42e2-ac14-422a2b6c6d35 MY_SNAPSHOT 00:13:32.856 18:08:30 -- target/nvmf_lvol.sh@47 -- # snapshot=333d953a-0400-4719-a6c3-ae61fa2a6f4d 00:13:32.856 18:08:30 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 42a7e32e-d298-42e2-ac14-422a2b6c6d35 30 00:13:33.114 18:08:30 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 333d953a-0400-4719-a6c3-ae61fa2a6f4d MY_CLONE 00:13:33.372 18:08:31 -- target/nvmf_lvol.sh@49 -- # clone=51ef394c-a89f-40b5-8194-f1580a7256a1 00:13:33.372 18:08:31 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 51ef394c-a89f-40b5-8194-f1580a7256a1 00:13:33.938 18:08:31 -- target/nvmf_lvol.sh@53 -- # wait 71031 00:13:42.050 Initializing NVMe Controllers 00:13:42.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:42.050 Controller IO queue size 128, less than required. 00:13:42.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:42.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:42.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:42.050 Initialization complete. Launching workers. 
00:13:42.050 ======================================================== 00:13:42.050 Latency(us) 00:13:42.050 Device Information : IOPS MiB/s Average min max 00:13:42.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8610.50 33.63 14871.90 666.62 74398.45 00:13:42.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8359.10 32.65 15325.70 3455.45 73644.96 00:13:42.050 ======================================================== 00:13:42.050 Total : 16969.60 66.29 15095.44 666.62 74398.45 00:13:42.050 00:13:42.050 18:08:39 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:42.050 18:08:39 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 42a7e32e-d298-42e2-ac14-422a2b6c6d35 00:13:42.308 18:08:40 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 601fb497-0e7d-4598-9395-ca78ee0fd60f 00:13:42.566 18:08:40 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:42.566 18:08:40 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:42.566 18:08:40 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:42.566 18:08:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:42.566 18:08:40 -- nvmf/common.sh@116 -- # sync 00:13:42.566 18:08:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:42.566 18:08:40 -- nvmf/common.sh@119 -- # set +e 00:13:42.566 18:08:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:42.566 18:08:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:42.566 rmmod nvme_tcp 00:13:42.566 rmmod nvme_fabrics 00:13:42.566 rmmod nvme_keyring 00:13:42.825 18:08:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:42.825 18:08:40 -- nvmf/common.sh@123 -- # set -e 00:13:42.825 18:08:40 -- nvmf/common.sh@124 -- # return 0 00:13:42.825 18:08:40 -- nvmf/common.sh@477 -- # '[' -n 70883 ']' 00:13:42.825 18:08:40 -- nvmf/common.sh@478 -- # killprocess 70883 00:13:42.825 18:08:40 -- common/autotest_common.sh@926 -- # '[' -z 70883 ']' 00:13:42.825 18:08:40 -- common/autotest_common.sh@930 -- # kill -0 70883 00:13:42.825 18:08:40 -- common/autotest_common.sh@931 -- # uname 00:13:42.825 18:08:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:42.825 18:08:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70883 00:13:42.825 18:08:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:42.825 18:08:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:42.825 18:08:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70883' 00:13:42.825 killing process with pid 70883 00:13:42.825 18:08:40 -- common/autotest_common.sh@945 -- # kill 70883 00:13:42.825 18:08:40 -- common/autotest_common.sh@950 -- # wait 70883 00:13:43.084 18:08:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:43.084 18:08:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:43.084 18:08:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:43.084 18:08:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.084 18:08:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:43.084 18:08:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.084 18:08:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.084 18:08:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.084 18:08:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
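For orientation, the nvmf_lvol run traced above boils down to the RPC sequence below: build a raid0 over two malloc bdevs, put a logical volume store and a 20M lvol on it, export the lvol over NVMe/TCP, and exercise snapshot/resize/clone/inflate while spdk_nvme_perf writes to the namespace. This is only a condensed sketch of what the traced script executed in this job; rpc.py stands for the full scripts/rpc.py path shown in the trace, and the lvs/lvol/snapshot/clone variables hold the UUIDs returned by the create calls, exactly as they do in the trace.

# target setup: TCP transport, two malloc bdevs striped into raid0, lvstore + 20M lvol on top
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
# export the lvol over NVMe/TCP
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# drive the namespace from the initiator side while reshaping the lvol underneath
spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc.py bdev_lvol_resize "$lvol" 30
clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"
wait "$perf_pid"                                                   # latency summary printed above
# teardown
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete "$lvol"
rpc.py bdev_lvol_delete_lvstore -u "$lvs"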
00:13:43.084 00:13:43.084 real 0m15.644s 00:13:43.084 user 1m5.500s 00:13:43.084 sys 0m3.686s 00:13:43.084 18:08:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.084 18:08:40 -- common/autotest_common.sh@10 -- # set +x 00:13:43.084 ************************************ 00:13:43.084 END TEST nvmf_lvol 00:13:43.084 ************************************ 00:13:43.084 18:08:40 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:43.084 18:08:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:43.084 18:08:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.084 18:08:40 -- common/autotest_common.sh@10 -- # set +x 00:13:43.084 ************************************ 00:13:43.084 START TEST nvmf_lvs_grow 00:13:43.084 ************************************ 00:13:43.084 18:08:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:43.084 * Looking for test storage... 00:13:43.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:43.084 18:08:41 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.084 18:08:41 -- nvmf/common.sh@7 -- # uname -s 00:13:43.084 18:08:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.084 18:08:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.084 18:08:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.084 18:08:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.084 18:08:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.084 18:08:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.084 18:08:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.084 18:08:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.084 18:08:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.084 18:08:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.084 18:08:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:43.084 18:08:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:13:43.084 18:08:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.084 18:08:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.343 18:08:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.343 18:08:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.343 18:08:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.343 18:08:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.343 18:08:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.343 18:08:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.343 18:08:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.343 18:08:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.343 18:08:41 -- paths/export.sh@5 -- # export PATH 00:13:43.343 18:08:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.343 18:08:41 -- nvmf/common.sh@46 -- # : 0 00:13:43.343 18:08:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.343 18:08:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.343 18:08:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.343 18:08:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.343 18:08:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.343 18:08:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:43.343 18:08:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.343 18:08:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.343 18:08:41 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.343 18:08:41 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.343 18:08:41 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:43.343 18:08:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.343 18:08:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.343 18:08:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.343 18:08:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.343 18:08:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:43.343 18:08:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.343 18:08:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.343 18:08:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.343 18:08:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:43.343 18:08:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:43.343 18:08:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:43.343 18:08:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:43.343 18:08:41 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:43.343 18:08:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:43.343 18:08:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.343 18:08:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.343 18:08:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.343 18:08:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:43.343 18:08:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.343 18:08:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.343 18:08:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.343 18:08:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.343 18:08:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.343 18:08:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.343 18:08:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.344 18:08:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.344 18:08:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:43.344 18:08:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:43.344 Cannot find device "nvmf_tgt_br" 00:13:43.344 18:08:41 -- nvmf/common.sh@154 -- # true 00:13:43.344 18:08:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.344 Cannot find device "nvmf_tgt_br2" 00:13:43.344 18:08:41 -- nvmf/common.sh@155 -- # true 00:13:43.344 18:08:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:43.344 18:08:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:43.344 Cannot find device "nvmf_tgt_br" 00:13:43.344 18:08:41 -- nvmf/common.sh@157 -- # true 00:13:43.344 18:08:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:43.344 Cannot find device "nvmf_tgt_br2" 00:13:43.344 18:08:41 -- nvmf/common.sh@158 -- # true 00:13:43.344 18:08:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:43.344 18:08:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:43.344 18:08:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.344 18:08:41 -- nvmf/common.sh@161 -- # true 00:13:43.344 18:08:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.344 18:08:41 -- nvmf/common.sh@162 -- # true 00:13:43.344 18:08:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.344 18:08:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.344 18:08:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.344 18:08:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.344 18:08:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.344 18:08:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.344 18:08:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.344 18:08:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.344 18:08:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.344 18:08:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:43.344 18:08:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:43.344 18:08:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:43.344 18:08:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:43.344 18:08:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.344 18:08:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.344 18:08:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.344 18:08:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:43.344 18:08:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:43.602 18:08:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.602 18:08:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.602 18:08:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.602 18:08:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.602 18:08:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.602 18:08:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:43.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:43.602 00:13:43.602 --- 10.0.0.2 ping statistics --- 00:13:43.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.602 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:43.602 18:08:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:43.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:13:43.602 00:13:43.602 --- 10.0.0.3 ping statistics --- 00:13:43.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.602 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:43.602 18:08:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:43.602 00:13:43.602 --- 10.0.0.1 ping statistics --- 00:13:43.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.602 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:43.602 18:08:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.602 18:08:41 -- nvmf/common.sh@421 -- # return 0 00:13:43.602 18:08:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:43.602 18:08:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.602 18:08:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:43.602 18:08:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:43.602 18:08:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.602 18:08:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:43.602 18:08:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:43.602 18:08:41 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:43.602 18:08:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:43.602 18:08:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:43.602 18:08:41 -- common/autotest_common.sh@10 -- # set +x 00:13:43.602 18:08:41 -- nvmf/common.sh@469 -- # nvmfpid=71385 00:13:43.602 18:08:41 -- nvmf/common.sh@470 -- # waitforlisten 71385 00:13:43.602 18:08:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:43.602 18:08:41 -- common/autotest_common.sh@819 -- # '[' -z 71385 ']' 00:13:43.602 18:08:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.602 18:08:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:43.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.602 18:08:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.602 18:08:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:43.602 18:08:41 -- common/autotest_common.sh@10 -- # set +x 00:13:43.602 [2024-04-25 18:08:41.437542] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:43.602 [2024-04-25 18:08:41.437636] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.861 [2024-04-25 18:08:41.575877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.861 [2024-04-25 18:08:41.676045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:43.861 [2024-04-25 18:08:41.676231] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.861 [2024-04-25 18:08:41.676258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.861 [2024-04-25 18:08:41.676268] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
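The nvmf_veth_init block above builds the virtual topology that NET_TYPE=virt runs use: the target runs inside a network namespace, and veth pairs bridged on the host connect it to the initiator. The "Cannot find device" and "Cannot open network namespace" messages earlier are only the tolerant cleanup of a topology that did not exist yet. Stripped of that cleanup, the setup amounts to roughly the following, with the interface names and addresses as used in this run.

ip netns add nvmf_tgt_ns_spdk
# one veth pair for the initiator side, two for the target side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listen addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host ends together and open TCP/4420 from the initiator interface
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity pings in both directions, as traced above
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1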
00:13:43.861 [2024-04-25 18:08:41.676327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.795 18:08:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:44.795 18:08:42 -- common/autotest_common.sh@852 -- # return 0 00:13:44.795 18:08:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:44.795 18:08:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:44.795 18:08:42 -- common/autotest_common.sh@10 -- # set +x 00:13:44.795 18:08:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.795 18:08:42 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.795 [2024-04-25 18:08:42.699819] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.795 18:08:42 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:44.795 18:08:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:44.795 18:08:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.795 18:08:42 -- common/autotest_common.sh@10 -- # set +x 00:13:45.108 ************************************ 00:13:45.108 START TEST lvs_grow_clean 00:13:45.108 ************************************ 00:13:45.108 18:08:42 -- common/autotest_common.sh@1104 -- # lvs_grow 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:45.108 18:08:42 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.380 18:08:43 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:45.380 18:08:43 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:45.380 18:08:43 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c0a44165-845c-4293-b666-f630c347e3f5 00:13:45.380 18:08:43 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:13:45.380 18:08:43 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:45.638 18:08:43 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:45.638 18:08:43 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:45.638 18:08:43 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0a44165-845c-4293-b666-f630c347e3f5 lvol 150 00:13:45.896 18:08:43 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5a36d074-5da8-48c7-a1e9-381bb6fc0af4 00:13:45.896 18:08:43 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:45.896 18:08:43 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:46.159 [2024-04-25 18:08:43.940200] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:46.159 [2024-04-25 18:08:43.940325] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:46.159 true 00:13:46.159 18:08:43 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:46.159 18:08:43 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:13:46.419 18:08:44 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:46.419 18:08:44 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:46.677 18:08:44 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a36d074-5da8-48c7-a1e9-381bb6fc0af4 00:13:46.935 18:08:44 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:47.193 [2024-04-25 18:08:44.920844] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.193 18:08:44 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.450 18:08:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71548 00:13:47.450 18:08:45 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:47.450 18:08:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.450 18:08:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71548 /var/tmp/bdevperf.sock 00:13:47.450 18:08:45 -- common/autotest_common.sh@819 -- # '[' -z 71548 ']' 00:13:47.450 18:08:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.450 18:08:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:47.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.450 18:08:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.450 18:08:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:47.450 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:47.450 [2024-04-25 18:08:45.255075] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:47.450 [2024-04-25 18:08:45.255175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71548 ] 00:13:47.708 [2024-04-25 18:08:45.390591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.708 [2024-04-25 18:08:45.492949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.275 18:08:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:48.275 18:08:46 -- common/autotest_common.sh@852 -- # return 0 00:13:48.275 18:08:46 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:48.533 Nvme0n1 00:13:48.791 18:08:46 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:48.791 [ 00:13:48.791 { 00:13:48.791 "aliases": [ 00:13:48.791 "5a36d074-5da8-48c7-a1e9-381bb6fc0af4" 00:13:48.791 ], 00:13:48.791 "assigned_rate_limits": { 00:13:48.791 "r_mbytes_per_sec": 0, 00:13:48.791 "rw_ios_per_sec": 0, 00:13:48.791 "rw_mbytes_per_sec": 0, 00:13:48.791 "w_mbytes_per_sec": 0 00:13:48.791 }, 00:13:48.791 "block_size": 4096, 00:13:48.791 "claimed": false, 00:13:48.791 "driver_specific": { 00:13:48.791 "mp_policy": "active_passive", 00:13:48.791 "nvme": [ 00:13:48.791 { 00:13:48.791 "ctrlr_data": { 00:13:48.791 "ana_reporting": false, 00:13:48.791 "cntlid": 1, 00:13:48.791 "firmware_revision": "24.01.1", 00:13:48.791 "model_number": "SPDK bdev Controller", 00:13:48.791 "multi_ctrlr": true, 00:13:48.791 "oacs": { 00:13:48.791 "firmware": 0, 00:13:48.791 "format": 0, 00:13:48.791 "ns_manage": 0, 00:13:48.791 "security": 0 00:13:48.791 }, 00:13:48.791 "serial_number": "SPDK0", 00:13:48.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:48.791 "vendor_id": "0x8086" 00:13:48.791 }, 00:13:48.791 "ns_data": { 00:13:48.791 "can_share": true, 00:13:48.791 "id": 1 00:13:48.791 }, 00:13:48.791 "trid": { 00:13:48.791 "adrfam": "IPv4", 00:13:48.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:48.791 "traddr": "10.0.0.2", 00:13:48.791 "trsvcid": "4420", 00:13:48.791 "trtype": "TCP" 00:13:48.791 }, 00:13:48.791 "vs": { 00:13:48.791 "nvme_version": "1.3" 00:13:48.791 } 00:13:48.791 } 00:13:48.791 ] 00:13:48.791 }, 00:13:48.791 "name": "Nvme0n1", 00:13:48.791 "num_blocks": 38912, 00:13:48.791 "product_name": "NVMe disk", 00:13:48.791 "supported_io_types": { 00:13:48.791 "abort": true, 00:13:48.791 "compare": true, 00:13:48.791 "compare_and_write": true, 00:13:48.791 "flush": true, 00:13:48.791 "nvme_admin": true, 00:13:48.791 "nvme_io": true, 00:13:48.791 "read": true, 00:13:48.791 "reset": true, 00:13:48.791 "unmap": true, 00:13:48.791 "write": true, 00:13:48.791 "write_zeroes": true 00:13:48.791 }, 00:13:48.791 "uuid": "5a36d074-5da8-48c7-a1e9-381bb6fc0af4", 00:13:48.791 "zoned": false 00:13:48.791 } 00:13:48.791 ] 00:13:48.791 18:08:46 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71596 00:13:48.791 18:08:46 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:48.791 18:08:46 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:49.050 Running I/O for 10 seconds... 
00:13:49.987 Latency(us) 00:13:49.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.987 Nvme0n1 : 1.00 7344.00 28.69 0.00 0.00 0.00 0.00 0.00 00:13:49.987 =================================================================================================================== 00:13:49.987 Total : 7344.00 28.69 0.00 0.00 0.00 0.00 0.00 00:13:49.987 00:13:50.922 18:08:48 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c0a44165-845c-4293-b666-f630c347e3f5 00:13:50.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.922 Nvme0n1 : 2.00 7256.00 28.34 0.00 0.00 0.00 0.00 0.00 00:13:50.922 =================================================================================================================== 00:13:50.922 Total : 7256.00 28.34 0.00 0.00 0.00 0.00 0.00 00:13:50.922 00:13:51.181 true 00:13:51.181 18:08:49 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:13:51.181 18:08:49 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:51.441 18:08:49 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:51.441 18:08:49 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:51.441 18:08:49 -- target/nvmf_lvs_grow.sh@65 -- # wait 71596 00:13:52.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.008 Nvme0n1 : 3.00 7369.33 28.79 0.00 0.00 0.00 0.00 0.00 00:13:52.008 =================================================================================================================== 00:13:52.008 Total : 7369.33 28.79 0.00 0.00 0.00 0.00 0.00 00:13:52.008 00:13:52.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.945 Nvme0n1 : 4.00 7445.50 29.08 0.00 0.00 0.00 0.00 0.00 00:13:52.945 =================================================================================================================== 00:13:52.945 Total : 7445.50 29.08 0.00 0.00 0.00 0.00 0.00 00:13:52.945 00:13:53.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.881 Nvme0n1 : 5.00 7486.00 29.24 0.00 0.00 0.00 0.00 0.00 00:13:53.881 =================================================================================================================== 00:13:53.881 Total : 7486.00 29.24 0.00 0.00 0.00 0.00 0.00 00:13:53.881 00:13:55.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.257 Nvme0n1 : 6.00 7496.83 29.28 0.00 0.00 0.00 0.00 0.00 00:13:55.257 =================================================================================================================== 00:13:55.257 Total : 7496.83 29.28 0.00 0.00 0.00 0.00 0.00 00:13:55.257 00:13:56.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.205 Nvme0n1 : 7.00 7487.57 29.25 0.00 0.00 0.00 0.00 0.00 00:13:56.205 =================================================================================================================== 00:13:56.205 Total : 7487.57 29.25 0.00 0.00 0.00 0.00 0.00 00:13:56.205 00:13:57.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.142 Nvme0n1 : 8.00 7449.25 29.10 0.00 0.00 0.00 0.00 0.00 00:13:57.142 
=================================================================================================================== 00:13:57.142 Total : 7449.25 29.10 0.00 0.00 0.00 0.00 0.00 00:13:57.142 00:13:58.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.077 Nvme0n1 : 9.00 7486.33 29.24 0.00 0.00 0.00 0.00 0.00 00:13:58.077 =================================================================================================================== 00:13:58.077 Total : 7486.33 29.24 0.00 0.00 0.00 0.00 0.00 00:13:58.077 00:13:59.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.012 Nvme0n1 : 10.00 7562.80 29.54 0.00 0.00 0.00 0.00 0.00 00:13:59.012 =================================================================================================================== 00:13:59.012 Total : 7562.80 29.54 0.00 0.00 0.00 0.00 0.00 00:13:59.012 00:13:59.012 00:13:59.012 Latency(us) 00:13:59.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.012 Nvme0n1 : 10.01 7569.24 29.57 0.00 0.00 16905.35 7149.38 46470.98 00:13:59.012 =================================================================================================================== 00:13:59.012 Total : 7569.24 29.57 0.00 0.00 16905.35 7149.38 46470.98 00:13:59.012 0 00:13:59.012 18:08:56 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71548 00:13:59.012 18:08:56 -- common/autotest_common.sh@926 -- # '[' -z 71548 ']' 00:13:59.012 18:08:56 -- common/autotest_common.sh@930 -- # kill -0 71548 00:13:59.012 18:08:56 -- common/autotest_common.sh@931 -- # uname 00:13:59.012 18:08:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:59.012 18:08:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71548 00:13:59.012 18:08:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:59.012 18:08:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:59.012 killing process with pid 71548 00:13:59.012 18:08:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71548' 00:13:59.012 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.012 00:13:59.012 Latency(us) 00:13:59.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.013 =================================================================================================================== 00:13:59.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.013 18:08:56 -- common/autotest_common.sh@945 -- # kill 71548 00:13:59.013 18:08:56 -- common/autotest_common.sh@950 -- # wait 71548 00:13:59.271 18:08:57 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:59.529 18:08:57 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:13:59.529 18:08:57 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:59.788 18:08:57 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:59.788 18:08:57 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:13:59.788 18:08:57 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:00.047 [2024-04-25 18:08:57.818899] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:00.047 
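To summarize the lvs_grow_clean pass that just completed: the test places a logical volume store on a file-backed AIO bdev, exports a 150M lvol over NVMe/TCP, enlarges the backing file, and then grows the lvstore while bdevperf keeps writing to the namespace, checking that the cluster count rises from 49 to 99. A condensed sketch of the traced steps follows; rpc.py, bdevperf and bdevperf.py stand for the full repository paths shown in the trace, the backing file is test/nvmf/target/aio_bdev, and $lvs/$lvol hold the UUIDs returned by the create calls.

truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev   # enlarge the backing file...
rpc.py bdev_aio_rescan aio_bdev                                           # ...and let the AIO bdev pick up the new size
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 49: the lvstore has not grown yet
# export the lvol and drive it with bdevperf over NVMe/TCP
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
# grow the lvstore while the I/O is running and confirm the new cluster count
rpc.py bdev_lvol_grow_lvstore -u "$lvs"
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99 after the grow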
18:08:57 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:14:00.047 18:08:57 -- common/autotest_common.sh@640 -- # local es=0 00:14:00.047 18:08:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:14:00.047 18:08:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.047 18:08:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.047 18:08:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.047 18:08:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.047 18:08:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.047 18:08:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.047 18:08:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.047 18:08:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:00.047 18:08:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:14:00.306 2024/04/25 18:08:58 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c0a44165-845c-4293-b666-f630c347e3f5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:00.306 request: 00:14:00.306 { 00:14:00.306 "method": "bdev_lvol_get_lvstores", 00:14:00.306 "params": { 00:14:00.306 "uuid": "c0a44165-845c-4293-b666-f630c347e3f5" 00:14:00.306 } 00:14:00.306 } 00:14:00.306 Got JSON-RPC error response 00:14:00.306 GoRPCClient: error on JSON-RPC call 00:14:00.306 18:08:58 -- common/autotest_common.sh@643 -- # es=1 00:14:00.306 18:08:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:00.306 18:08:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:00.306 18:08:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:00.306 18:08:58 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:00.564 aio_bdev 00:14:00.564 18:08:58 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5a36d074-5da8-48c7-a1e9-381bb6fc0af4 00:14:00.565 18:08:58 -- common/autotest_common.sh@887 -- # local bdev_name=5a36d074-5da8-48c7-a1e9-381bb6fc0af4 00:14:00.565 18:08:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:00.565 18:08:58 -- common/autotest_common.sh@889 -- # local i 00:14:00.565 18:08:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:00.565 18:08:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:00.565 18:08:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:00.823 18:08:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5a36d074-5da8-48c7-a1e9-381bb6fc0af4 -t 2000 00:14:01.082 [ 00:14:01.082 { 00:14:01.082 "aliases": [ 00:14:01.082 "lvs/lvol" 00:14:01.082 ], 00:14:01.082 "assigned_rate_limits": { 00:14:01.082 "r_mbytes_per_sec": 0, 00:14:01.082 "rw_ios_per_sec": 0, 00:14:01.082 "rw_mbytes_per_sec": 0, 00:14:01.082 "w_mbytes_per_sec": 0 00:14:01.082 }, 00:14:01.082 "block_size": 4096, 
00:14:01.082 "claimed": false, 00:14:01.082 "driver_specific": { 00:14:01.082 "lvol": { 00:14:01.082 "base_bdev": "aio_bdev", 00:14:01.082 "clone": false, 00:14:01.082 "esnap_clone": false, 00:14:01.083 "lvol_store_uuid": "c0a44165-845c-4293-b666-f630c347e3f5", 00:14:01.083 "snapshot": false, 00:14:01.083 "thin_provision": false 00:14:01.083 } 00:14:01.083 }, 00:14:01.083 "name": "5a36d074-5da8-48c7-a1e9-381bb6fc0af4", 00:14:01.083 "num_blocks": 38912, 00:14:01.083 "product_name": "Logical Volume", 00:14:01.083 "supported_io_types": { 00:14:01.083 "abort": false, 00:14:01.083 "compare": false, 00:14:01.083 "compare_and_write": false, 00:14:01.083 "flush": false, 00:14:01.083 "nvme_admin": false, 00:14:01.083 "nvme_io": false, 00:14:01.083 "read": true, 00:14:01.083 "reset": true, 00:14:01.083 "unmap": true, 00:14:01.083 "write": true, 00:14:01.083 "write_zeroes": true 00:14:01.083 }, 00:14:01.083 "uuid": "5a36d074-5da8-48c7-a1e9-381bb6fc0af4", 00:14:01.083 "zoned": false 00:14:01.083 } 00:14:01.083 ] 00:14:01.083 18:08:58 -- common/autotest_common.sh@895 -- # return 0 00:14:01.083 18:08:58 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:01.083 18:08:58 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:14:01.342 18:08:59 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:01.342 18:08:59 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0a44165-845c-4293-b666-f630c347e3f5 00:14:01.342 18:08:59 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:01.342 18:08:59 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:01.342 18:08:59 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5a36d074-5da8-48c7-a1e9-381bb6fc0af4 00:14:01.600 18:08:59 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0a44165-845c-4293-b666-f630c347e3f5 00:14:01.859 18:08:59 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:02.117 18:08:59 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:02.684 ************************************ 00:14:02.684 END TEST lvs_grow_clean 00:14:02.684 ************************************ 00:14:02.684 00:14:02.684 real 0m17.639s 00:14:02.684 user 0m16.955s 00:14:02.684 sys 0m2.085s 00:14:02.684 18:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.684 18:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:02.684 18:09:00 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:02.684 18:09:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:02.684 18:09:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.684 18:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:02.684 ************************************ 00:14:02.684 START TEST lvs_grow_dirty 00:14:02.684 ************************************ 00:14:02.684 18:09:00 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:02.684 18:09:00 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:02.684 18:09:00 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:02.684 18:09:00 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:02.684 18:09:00 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:14:02.684 18:09:00 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:02.684 18:09:00 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:02.685 18:09:00 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:02.685 18:09:00 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:02.685 18:09:00 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.944 18:09:00 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:02.944 18:09:00 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:03.203 18:09:00 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c152409c-3909-48a0-96d4-f9ff955c2996 00:14:03.203 18:09:00 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:03.203 18:09:00 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:03.461 18:09:01 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:03.461 18:09:01 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:03.461 18:09:01 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c152409c-3909-48a0-96d4-f9ff955c2996 lvol 150 00:14:03.719 18:09:01 -- target/nvmf_lvs_grow.sh@33 -- # lvol=460ada45-8ded-497d-9a8e-fa04e26abbc2 00:14:03.719 18:09:01 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:03.719 18:09:01 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:03.978 [2024-04-25 18:09:01.721296] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:03.978 [2024-04-25 18:09:01.721408] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:03.978 true 00:14:03.978 18:09:01 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:03.978 18:09:01 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:04.238 18:09:02 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:04.238 18:09:02 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:04.499 18:09:02 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 460ada45-8ded-497d-9a8e-fa04e26abbc2 00:14:04.757 18:09:02 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:04.757 18:09:02 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:05.016 18:09:02 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71980 00:14:05.016 18:09:02 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 
-q 128 -w randwrite -t 10 -S 1 -z 00:14:05.016 18:09:02 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:05.016 18:09:02 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71980 /var/tmp/bdevperf.sock 00:14:05.016 18:09:02 -- common/autotest_common.sh@819 -- # '[' -z 71980 ']' 00:14:05.016 18:09:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.016 18:09:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:05.016 18:09:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:05.016 18:09:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:05.016 18:09:02 -- common/autotest_common.sh@10 -- # set +x 00:14:05.274 [2024-04-25 18:09:02.963792] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:05.274 [2024-04-25 18:09:02.963884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71980 ] 00:14:05.275 [2024-04-25 18:09:03.094401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.532 [2024-04-25 18:09:03.226915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.099 18:09:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:06.099 18:09:03 -- common/autotest_common.sh@852 -- # return 0 00:14:06.099 18:09:03 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:06.357 Nvme0n1 00:14:06.357 18:09:04 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:06.615 [ 00:14:06.615 { 00:14:06.615 "aliases": [ 00:14:06.615 "460ada45-8ded-497d-9a8e-fa04e26abbc2" 00:14:06.615 ], 00:14:06.615 "assigned_rate_limits": { 00:14:06.615 "r_mbytes_per_sec": 0, 00:14:06.615 "rw_ios_per_sec": 0, 00:14:06.615 "rw_mbytes_per_sec": 0, 00:14:06.615 "w_mbytes_per_sec": 0 00:14:06.615 }, 00:14:06.615 "block_size": 4096, 00:14:06.615 "claimed": false, 00:14:06.615 "driver_specific": { 00:14:06.615 "mp_policy": "active_passive", 00:14:06.615 "nvme": [ 00:14:06.615 { 00:14:06.615 "ctrlr_data": { 00:14:06.615 "ana_reporting": false, 00:14:06.615 "cntlid": 1, 00:14:06.615 "firmware_revision": "24.01.1", 00:14:06.615 "model_number": "SPDK bdev Controller", 00:14:06.615 "multi_ctrlr": true, 00:14:06.615 "oacs": { 00:14:06.615 "firmware": 0, 00:14:06.615 "format": 0, 00:14:06.615 "ns_manage": 0, 00:14:06.615 "security": 0 00:14:06.615 }, 00:14:06.615 "serial_number": "SPDK0", 00:14:06.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.615 "vendor_id": "0x8086" 00:14:06.615 }, 00:14:06.615 "ns_data": { 00:14:06.615 "can_share": true, 00:14:06.615 "id": 1 00:14:06.615 }, 00:14:06.615 "trid": { 00:14:06.615 "adrfam": "IPv4", 00:14:06.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.615 "traddr": "10.0.0.2", 00:14:06.615 "trsvcid": "4420", 00:14:06.615 "trtype": "TCP" 00:14:06.615 }, 00:14:06.615 "vs": { 00:14:06.615 "nvme_version": "1.3" 00:14:06.615 } 00:14:06.615 } 00:14:06.615 ] 00:14:06.615 }, 
00:14:06.615 "name": "Nvme0n1", 00:14:06.615 "num_blocks": 38912, 00:14:06.615 "product_name": "NVMe disk", 00:14:06.615 "supported_io_types": { 00:14:06.615 "abort": true, 00:14:06.615 "compare": true, 00:14:06.615 "compare_and_write": true, 00:14:06.615 "flush": true, 00:14:06.615 "nvme_admin": true, 00:14:06.615 "nvme_io": true, 00:14:06.615 "read": true, 00:14:06.615 "reset": true, 00:14:06.615 "unmap": true, 00:14:06.615 "write": true, 00:14:06.615 "write_zeroes": true 00:14:06.615 }, 00:14:06.615 "uuid": "460ada45-8ded-497d-9a8e-fa04e26abbc2", 00:14:06.615 "zoned": false 00:14:06.615 } 00:14:06.615 ] 00:14:06.615 18:09:04 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72023 00:14:06.615 18:09:04 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:06.615 18:09:04 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:06.873 Running I/O for 10 seconds... 00:14:07.807 Latency(us) 00:14:07.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.807 Nvme0n1 : 1.00 8199.00 32.03 0.00 0.00 0.00 0.00 0.00 00:14:07.807 =================================================================================================================== 00:14:07.807 Total : 8199.00 32.03 0.00 0.00 0.00 0.00 0.00 00:14:07.807 00:14:08.744 18:09:06 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:08.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.744 Nvme0n1 : 2.00 8218.00 32.10 0.00 0.00 0.00 0.00 0.00 00:14:08.744 =================================================================================================================== 00:14:08.744 Total : 8218.00 32.10 0.00 0.00 0.00 0.00 0.00 00:14:08.744 00:14:09.003 true 00:14:09.003 18:09:06 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:09.003 18:09:06 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:09.284 18:09:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:09.284 18:09:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:09.284 18:09:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 72023 00:14:09.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.852 Nvme0n1 : 3.00 8227.33 32.14 0.00 0.00 0.00 0.00 0.00 00:14:09.852 =================================================================================================================== 00:14:09.852 Total : 8227.33 32.14 0.00 0.00 0.00 0.00 0.00 00:14:09.852 00:14:10.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.789 Nvme0n1 : 4.00 8234.50 32.17 0.00 0.00 0.00 0.00 0.00 00:14:10.789 =================================================================================================================== 00:14:10.789 Total : 8234.50 32.17 0.00 0.00 0.00 0.00 0.00 00:14:10.789 00:14:11.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.726 Nvme0n1 : 5.00 8225.00 32.13 0.00 0.00 0.00 0.00 0.00 00:14:11.726 =================================================================================================================== 00:14:11.726 Total : 8225.00 32.13 0.00 0.00 0.00 0.00 0.00 00:14:11.726 00:14:13.104 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.104 Nvme0n1 : 6.00 8243.50 32.20 0.00 0.00 0.00 0.00 0.00 00:14:13.104 =================================================================================================================== 00:14:13.104 Total : 8243.50 32.20 0.00 0.00 0.00 0.00 0.00 00:14:13.104 00:14:13.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.694 Nvme0n1 : 7.00 8250.86 32.23 0.00 0.00 0.00 0.00 0.00 00:14:13.694 =================================================================================================================== 00:14:13.694 Total : 8250.86 32.23 0.00 0.00 0.00 0.00 0.00 00:14:13.694 00:14:15.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.070 Nvme0n1 : 8.00 8067.00 31.51 0.00 0.00 0.00 0.00 0.00 00:14:15.070 =================================================================================================================== 00:14:15.070 Total : 8067.00 31.51 0.00 0.00 0.00 0.00 0.00 00:14:15.070 00:14:16.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.020 Nvme0n1 : 9.00 8063.89 31.50 0.00 0.00 0.00 0.00 0.00 00:14:16.020 =================================================================================================================== 00:14:16.020 Total : 8063.89 31.50 0.00 0.00 0.00 0.00 0.00 00:14:16.020 00:14:16.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.954 Nvme0n1 : 10.00 8047.20 31.43 0.00 0.00 0.00 0.00 0.00 00:14:16.954 =================================================================================================================== 00:14:16.954 Total : 8047.20 31.43 0.00 0.00 0.00 0.00 0.00 00:14:16.954 00:14:16.954 00:14:16.954 Latency(us) 00:14:16.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.954 Nvme0n1 : 10.01 8052.14 31.45 0.00 0.00 15887.73 4170.47 170631.91 00:14:16.954 =================================================================================================================== 00:14:16.954 Total : 8052.14 31.45 0.00 0.00 15887.73 4170.47 170631.91 00:14:16.954 0 00:14:16.954 18:09:14 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71980 00:14:16.954 18:09:14 -- common/autotest_common.sh@926 -- # '[' -z 71980 ']' 00:14:16.954 18:09:14 -- common/autotest_common.sh@930 -- # kill -0 71980 00:14:16.954 18:09:14 -- common/autotest_common.sh@931 -- # uname 00:14:16.954 18:09:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:16.954 18:09:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71980 00:14:16.954 18:09:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:16.954 18:09:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:16.954 killing process with pid 71980 00:14:16.954 18:09:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71980' 00:14:16.954 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.954 00:14:16.954 Latency(us) 00:14:16.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.954 =================================================================================================================== 00:14:16.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.954 18:09:14 -- common/autotest_common.sh@945 -- # kill 71980 00:14:16.954 18:09:14 -- 
common/autotest_common.sh@950 -- # wait 71980 00:14:17.212 18:09:14 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:17.470 18:09:15 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:17.470 18:09:15 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:17.728 18:09:15 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:17.728 18:09:15 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:17.728 18:09:15 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 71385 00:14:17.728 18:09:15 -- target/nvmf_lvs_grow.sh@74 -- # wait 71385 00:14:17.728 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 71385 Killed "${NVMF_APP[@]}" "$@" 00:14:17.728 18:09:15 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:17.728 18:09:15 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:17.728 18:09:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:17.728 18:09:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:17.728 18:09:15 -- common/autotest_common.sh@10 -- # set +x 00:14:17.728 18:09:15 -- nvmf/common.sh@469 -- # nvmfpid=72175 00:14:17.728 18:09:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:17.728 18:09:15 -- nvmf/common.sh@470 -- # waitforlisten 72175 00:14:17.728 18:09:15 -- common/autotest_common.sh@819 -- # '[' -z 72175 ']' 00:14:17.728 18:09:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.728 18:09:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:17.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.728 18:09:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.728 18:09:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:17.728 18:09:15 -- common/autotest_common.sh@10 -- # set +x 00:14:17.728 [2024-04-25 18:09:15.560850] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:17.728 [2024-04-25 18:09:15.560939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.987 [2024-04-25 18:09:15.695921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.987 [2024-04-25 18:09:15.826120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:17.987 [2024-04-25 18:09:15.826332] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.987 [2024-04-25 18:09:15.826350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.987 [2024-04-25 18:09:15.826359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
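The trace above hard-kills the nvmf target while the grown lvol store still has unsynced metadata, then restarts it so the blobstore recovery path (the bs_recover / "Recover: blob" notices that follow) can replay it. Condensed into a standalone sketch, reusing this run's lvstore UUID and paths and the nvmf/common.sh helpers, the flow is roughly:

# Sketch only: condensed from the dirty-lvstore recovery traced here, not a new test.
# Assumes rpc.py = /home/vagrant/spdk_repo/spdk/scripts/rpc.py on PATH.
LVS_UUID=c152409c-3909-48a0-96d4-f9ff955c2996

kill -9 "$nvmfpid"            # hard-kill nvmf_tgt while the lvol store metadata is still dirty
nvmfappstart -m 0x1           # nvmf/common.sh helper: restart the target on one core

# Re-create the AIO bdev backing the store; loading it triggers blobstore recovery.
rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

# The grown store should come back with its new geometry intact.
free=$(rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
total=$(rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 ))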
00:14:17.987 [2024-04-25 18:09:15.826397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.922 18:09:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:18.922 18:09:16 -- common/autotest_common.sh@852 -- # return 0 00:14:18.922 18:09:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:18.922 18:09:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:18.922 18:09:16 -- common/autotest_common.sh@10 -- # set +x 00:14:18.922 18:09:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.922 18:09:16 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:18.922 [2024-04-25 18:09:16.774040] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:18.922 [2024-04-25 18:09:16.774469] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:18.922 [2024-04-25 18:09:16.774678] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:18.922 18:09:16 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:18.922 18:09:16 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 460ada45-8ded-497d-9a8e-fa04e26abbc2 00:14:18.922 18:09:16 -- common/autotest_common.sh@887 -- # local bdev_name=460ada45-8ded-497d-9a8e-fa04e26abbc2 00:14:18.922 18:09:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:18.922 18:09:16 -- common/autotest_common.sh@889 -- # local i 00:14:18.922 18:09:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:18.922 18:09:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:18.922 18:09:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:19.181 18:09:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 460ada45-8ded-497d-9a8e-fa04e26abbc2 -t 2000 00:14:19.439 [ 00:14:19.439 { 00:14:19.439 "aliases": [ 00:14:19.439 "lvs/lvol" 00:14:19.439 ], 00:14:19.439 "assigned_rate_limits": { 00:14:19.439 "r_mbytes_per_sec": 0, 00:14:19.439 "rw_ios_per_sec": 0, 00:14:19.439 "rw_mbytes_per_sec": 0, 00:14:19.439 "w_mbytes_per_sec": 0 00:14:19.439 }, 00:14:19.439 "block_size": 4096, 00:14:19.439 "claimed": false, 00:14:19.439 "driver_specific": { 00:14:19.439 "lvol": { 00:14:19.439 "base_bdev": "aio_bdev", 00:14:19.439 "clone": false, 00:14:19.439 "esnap_clone": false, 00:14:19.439 "lvol_store_uuid": "c152409c-3909-48a0-96d4-f9ff955c2996", 00:14:19.439 "snapshot": false, 00:14:19.439 "thin_provision": false 00:14:19.440 } 00:14:19.440 }, 00:14:19.440 "name": "460ada45-8ded-497d-9a8e-fa04e26abbc2", 00:14:19.440 "num_blocks": 38912, 00:14:19.440 "product_name": "Logical Volume", 00:14:19.440 "supported_io_types": { 00:14:19.440 "abort": false, 00:14:19.440 "compare": false, 00:14:19.440 "compare_and_write": false, 00:14:19.440 "flush": false, 00:14:19.440 "nvme_admin": false, 00:14:19.440 "nvme_io": false, 00:14:19.440 "read": true, 00:14:19.440 "reset": true, 00:14:19.440 "unmap": true, 00:14:19.440 "write": true, 00:14:19.440 "write_zeroes": true 00:14:19.440 }, 00:14:19.440 "uuid": "460ada45-8ded-497d-9a8e-fa04e26abbc2", 00:14:19.440 "zoned": false 00:14:19.440 } 00:14:19.440 ] 00:14:19.440 18:09:17 -- common/autotest_common.sh@895 -- # return 0 00:14:19.440 18:09:17 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:19.440 18:09:17 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:19.698 18:09:17 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:19.698 18:09:17 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:19.698 18:09:17 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:19.956 18:09:17 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:19.956 18:09:17 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:20.215 [2024-04-25 18:09:17.983065] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:20.215 18:09:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:20.215 18:09:18 -- common/autotest_common.sh@640 -- # local es=0 00:14:20.215 18:09:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:20.215 18:09:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.215 18:09:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.215 18:09:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.215 18:09:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.215 18:09:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.215 18:09:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.215 18:09:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.215 18:09:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:20.215 18:09:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:20.473 2024/04/25 18:09:18 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c152409c-3909-48a0-96d4-f9ff955c2996], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:20.473 request: 00:14:20.473 { 00:14:20.473 "method": "bdev_lvol_get_lvstores", 00:14:20.473 "params": { 00:14:20.473 "uuid": "c152409c-3909-48a0-96d4-f9ff955c2996" 00:14:20.473 } 00:14:20.473 } 00:14:20.473 Got JSON-RPC error response 00:14:20.473 GoRPCClient: error on JSON-RPC call 00:14:20.473 18:09:18 -- common/autotest_common.sh@643 -- # es=1 00:14:20.473 18:09:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:20.473 18:09:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:20.473 18:09:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:20.473 18:09:18 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:20.732 aio_bdev 00:14:20.732 18:09:18 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 460ada45-8ded-497d-9a8e-fa04e26abbc2 00:14:20.732 18:09:18 -- common/autotest_common.sh@887 -- # local bdev_name=460ada45-8ded-497d-9a8e-fa04e26abbc2 00:14:20.732 18:09:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:20.732 18:09:18 -- 
common/autotest_common.sh@889 -- # local i 00:14:20.732 18:09:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:20.732 18:09:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:20.732 18:09:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:20.990 18:09:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 460ada45-8ded-497d-9a8e-fa04e26abbc2 -t 2000 00:14:20.990 [ 00:14:20.990 { 00:14:20.990 "aliases": [ 00:14:20.990 "lvs/lvol" 00:14:20.990 ], 00:14:20.990 "assigned_rate_limits": { 00:14:20.990 "r_mbytes_per_sec": 0, 00:14:20.990 "rw_ios_per_sec": 0, 00:14:20.990 "rw_mbytes_per_sec": 0, 00:14:20.990 "w_mbytes_per_sec": 0 00:14:20.990 }, 00:14:20.990 "block_size": 4096, 00:14:20.990 "claimed": false, 00:14:20.990 "driver_specific": { 00:14:20.990 "lvol": { 00:14:20.990 "base_bdev": "aio_bdev", 00:14:20.990 "clone": false, 00:14:20.990 "esnap_clone": false, 00:14:20.990 "lvol_store_uuid": "c152409c-3909-48a0-96d4-f9ff955c2996", 00:14:20.990 "snapshot": false, 00:14:20.990 "thin_provision": false 00:14:20.990 } 00:14:20.990 }, 00:14:20.990 "name": "460ada45-8ded-497d-9a8e-fa04e26abbc2", 00:14:20.990 "num_blocks": 38912, 00:14:20.990 "product_name": "Logical Volume", 00:14:20.990 "supported_io_types": { 00:14:20.990 "abort": false, 00:14:20.990 "compare": false, 00:14:20.990 "compare_and_write": false, 00:14:20.990 "flush": false, 00:14:20.990 "nvme_admin": false, 00:14:20.990 "nvme_io": false, 00:14:20.990 "read": true, 00:14:20.990 "reset": true, 00:14:20.990 "unmap": true, 00:14:20.990 "write": true, 00:14:20.990 "write_zeroes": true 00:14:20.990 }, 00:14:20.990 "uuid": "460ada45-8ded-497d-9a8e-fa04e26abbc2", 00:14:20.991 "zoned": false 00:14:20.991 } 00:14:20.991 ] 00:14:21.249 18:09:18 -- common/autotest_common.sh@895 -- # return 0 00:14:21.249 18:09:18 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:21.249 18:09:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:21.249 18:09:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:21.249 18:09:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:21.249 18:09:19 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:21.513 18:09:19 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:21.513 18:09:19 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 460ada45-8ded-497d-9a8e-fa04e26abbc2 00:14:21.792 18:09:19 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c152409c-3909-48a0-96d4-f9ff955c2996 00:14:22.050 18:09:19 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:22.308 18:09:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:22.566 00:14:22.566 real 0m19.990s 00:14:22.566 user 0m40.576s 00:14:22.566 sys 0m8.893s 00:14:22.566 18:09:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.566 18:09:20 -- common/autotest_common.sh@10 -- # set +x 00:14:22.566 ************************************ 00:14:22.566 END TEST lvs_grow_dirty 00:14:22.566 ************************************ 00:14:22.566 18:09:20 -- target/nvmf_lvs_grow.sh@1 
-- # process_shm --id 0 00:14:22.566 18:09:20 -- common/autotest_common.sh@796 -- # type=--id 00:14:22.566 18:09:20 -- common/autotest_common.sh@797 -- # id=0 00:14:22.566 18:09:20 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:22.566 18:09:20 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:22.566 18:09:20 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:22.566 18:09:20 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:22.566 18:09:20 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:22.566 18:09:20 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:22.566 nvmf_trace.0 00:14:22.566 18:09:20 -- common/autotest_common.sh@811 -- # return 0 00:14:22.566 18:09:20 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:22.566 18:09:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:22.566 18:09:20 -- nvmf/common.sh@116 -- # sync 00:14:22.825 18:09:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:22.825 18:09:20 -- nvmf/common.sh@119 -- # set +e 00:14:22.825 18:09:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:22.825 18:09:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:22.825 rmmod nvme_tcp 00:14:22.825 rmmod nvme_fabrics 00:14:22.825 rmmod nvme_keyring 00:14:22.825 18:09:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:22.825 18:09:20 -- nvmf/common.sh@123 -- # set -e 00:14:22.825 18:09:20 -- nvmf/common.sh@124 -- # return 0 00:14:22.825 18:09:20 -- nvmf/common.sh@477 -- # '[' -n 72175 ']' 00:14:22.825 18:09:20 -- nvmf/common.sh@478 -- # killprocess 72175 00:14:22.825 18:09:20 -- common/autotest_common.sh@926 -- # '[' -z 72175 ']' 00:14:22.825 18:09:20 -- common/autotest_common.sh@930 -- # kill -0 72175 00:14:22.825 18:09:20 -- common/autotest_common.sh@931 -- # uname 00:14:22.825 18:09:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:22.825 18:09:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72175 00:14:22.825 18:09:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:22.825 18:09:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:22.825 18:09:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72175' 00:14:22.825 killing process with pid 72175 00:14:22.825 18:09:20 -- common/autotest_common.sh@945 -- # kill 72175 00:14:22.825 18:09:20 -- common/autotest_common.sh@950 -- # wait 72175 00:14:23.393 18:09:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:23.393 18:09:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:23.393 18:09:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:23.393 18:09:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.393 18:09:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:23.393 18:09:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.393 18:09:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.393 18:09:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.393 18:09:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:23.393 ************************************ 00:14:23.393 END TEST nvmf_lvs_grow 00:14:23.393 ************************************ 00:14:23.393 00:14:23.393 real 0m40.183s 00:14:23.393 user 1m3.605s 00:14:23.393 sys 0m11.733s 00:14:23.393 18:09:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 
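The END TEST block above is produced by the shared process_shm and nvmftestfini helpers: the trace shared-memory file is archived, the nvme kernel modules are unloaded, the target process is stopped, and the veth namespace is torn down. A compressed sketch of that teardown, assuming the helper behaviour and the names/paths seen in this trace:

# Sketch of the teardown traced above (process_shm + nvmftestfini); names and paths are this run's.
# "$output_dir" is a stand-in for the /home/vagrant/spdk_repo/spdk/../output directory in the trace.
tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # archive the trace buffer

modprobe -v -r nvme-tcp        # as logged, this also pulls out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics

kill "$nvmfpid" && wait "$nvmfpid"     # stop nvmf_tgt (pid 72175 in this run)
ip netns delete nvmf_tgt_ns_spdk       # assumed equivalent of the _remove_spdk_ns helper
ip -4 addr flush nvmf_init_if          # drop the initiator-side address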
00:14:23.393 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:14:23.393 18:09:21 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:23.393 18:09:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:23.393 18:09:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.393 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:14:23.393 ************************************ 00:14:23.393 START TEST nvmf_bdev_io_wait 00:14:23.393 ************************************ 00:14:23.393 18:09:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:23.393 * Looking for test storage... 00:14:23.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.393 18:09:21 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.393 18:09:21 -- nvmf/common.sh@7 -- # uname -s 00:14:23.393 18:09:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.393 18:09:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.393 18:09:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.393 18:09:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.393 18:09:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.393 18:09:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.393 18:09:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.393 18:09:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.393 18:09:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.393 18:09:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.393 18:09:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:14:23.393 18:09:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:14:23.393 18:09:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.393 18:09:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.393 18:09:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.393 18:09:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.393 18:09:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.393 18:09:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.393 18:09:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.393 18:09:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.393 18:09:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.393 18:09:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.393 18:09:21 -- paths/export.sh@5 -- # export PATH 00:14:23.393 18:09:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.393 18:09:21 -- nvmf/common.sh@46 -- # : 0 00:14:23.393 18:09:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:23.393 18:09:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:23.393 18:09:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:23.393 18:09:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.393 18:09:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.393 18:09:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:23.393 18:09:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:23.393 18:09:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:23.393 18:09:21 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.393 18:09:21 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.393 18:09:21 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:23.393 18:09:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:23.393 18:09:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.393 18:09:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:23.393 18:09:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:23.393 18:09:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:23.393 18:09:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.393 18:09:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.393 18:09:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.393 18:09:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:23.393 18:09:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:23.393 18:09:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:23.393 18:09:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:23.393 18:09:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
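With NET_TYPE=virt and a tcp transport, nvmftestinit falls through to nvmf_veth_init, whose trace follows. Condensed, the topology it builds looks roughly like the sketch below; interface names and addresses are the ones from this trace, and the per-link "ip link set ... up" calls are left out:

# Rough shape of what nvmf_veth_init sets up (values taken from the trace that follows).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP

ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                        # allow bridged traffic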
00:14:23.393 18:09:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:23.393 18:09:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.393 18:09:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.393 18:09:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:23.393 18:09:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:23.393 18:09:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.393 18:09:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.393 18:09:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.393 18:09:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.393 18:09:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.393 18:09:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.393 18:09:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.393 18:09:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.393 18:09:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:23.393 18:09:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:23.393 Cannot find device "nvmf_tgt_br" 00:14:23.393 18:09:21 -- nvmf/common.sh@154 -- # true 00:14:23.393 18:09:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.393 Cannot find device "nvmf_tgt_br2" 00:14:23.393 18:09:21 -- nvmf/common.sh@155 -- # true 00:14:23.393 18:09:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:23.393 18:09:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:23.653 Cannot find device "nvmf_tgt_br" 00:14:23.653 18:09:21 -- nvmf/common.sh@157 -- # true 00:14:23.653 18:09:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:23.653 Cannot find device "nvmf_tgt_br2" 00:14:23.653 18:09:21 -- nvmf/common.sh@158 -- # true 00:14:23.653 18:09:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:23.653 18:09:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:23.653 18:09:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.653 18:09:21 -- nvmf/common.sh@161 -- # true 00:14:23.653 18:09:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.653 18:09:21 -- nvmf/common.sh@162 -- # true 00:14:23.653 18:09:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.653 18:09:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.653 18:09:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.653 18:09:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.653 18:09:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.653 18:09:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.653 18:09:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.653 18:09:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:23.653 18:09:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:23.653 
18:09:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:23.653 18:09:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:23.653 18:09:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:23.653 18:09:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:23.653 18:09:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.653 18:09:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.653 18:09:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.653 18:09:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:23.653 18:09:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:23.653 18:09:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.653 18:09:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.653 18:09:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.653 18:09:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.653 18:09:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.653 18:09:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:23.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:23.653 00:14:23.653 --- 10.0.0.2 ping statistics --- 00:14:23.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.653 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:23.653 18:09:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:23.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:23.653 00:14:23.653 --- 10.0.0.3 ping statistics --- 00:14:23.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.653 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:23.653 18:09:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:23.912 00:14:23.912 --- 10.0.0.1 ping statistics --- 00:14:23.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.912 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:23.912 18:09:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.912 18:09:21 -- nvmf/common.sh@421 -- # return 0 00:14:23.912 18:09:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:23.912 18:09:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.912 18:09:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:23.912 18:09:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:23.912 18:09:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.912 18:09:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:23.912 18:09:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:23.912 18:09:21 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:23.912 18:09:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:23.912 18:09:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:23.912 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 18:09:21 -- nvmf/common.sh@469 -- # nvmfpid=72591 00:14:23.912 18:09:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:23.912 18:09:21 -- nvmf/common.sh@470 -- # waitforlisten 72591 00:14:23.912 18:09:21 -- common/autotest_common.sh@819 -- # '[' -z 72591 ']' 00:14:23.912 18:09:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.912 18:09:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:23.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.912 18:09:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.912 18:09:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:23.912 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 [2024-04-25 18:09:21.675346] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:23.912 [2024-04-25 18:09:21.675434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.912 [2024-04-25 18:09:21.814862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:24.171 [2024-04-25 18:09:21.929022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:24.171 [2024-04-25 18:09:21.929200] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.171 [2024-04-25 18:09:21.929214] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.171 [2024-04-25 18:09:21.929223] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
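This target is launched inside the namespace with --wait-for-rpc, so it pauses at the RPC layer until bdev options are set and framework_start_init is issued (the rpc_cmd calls traced a little further down). A minimal sketch of that handshake, reusing this run's binary path, core mask and socket, with waitforlisten standing in for the framework's wait helper:

# Sketch: start nvmf_tgt paused before framework init, configure it over RPC, then start it.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

waitforlisten "$nvmfpid" /var/tmp/spdk.sock      # framework helper: block until the RPC socket answers

rpc.py bdev_set_options -p 5 -c 1                # must happen before framework init
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o -u 8192   # same transport options as the traced rpc_cmd calls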
00:14:24.171 [2024-04-25 18:09:21.929407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.171 [2024-04-25 18:09:21.929998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.171 [2024-04-25 18:09:21.930086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.171 [2024-04-25 18:09:21.930097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.739 18:09:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:24.739 18:09:22 -- common/autotest_common.sh@852 -- # return 0 00:14:24.739 18:09:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:24.739 18:09:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:24.739 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.998 18:09:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.998 18:09:22 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:24.998 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.998 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.998 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:24.999 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.999 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.999 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:24.999 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.999 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.999 [2024-04-25 18:09:22.804182] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.999 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:24.999 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.999 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.999 Malloc0 00:14:24.999 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:24.999 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.999 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.999 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:24.999 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.999 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.999 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.999 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.999 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.999 [2024-04-25 18:09:22.870447] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.999 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=72644 00:14:24.999 18:09:22 
-- target/bdev_io_wait.sh@30 -- # READ_PID=72646 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # config=() 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # local subsystem config 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=72648 00:14:24.999 18:09:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:24.999 { 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme$subsystem", 00:14:24.999 "trtype": "$TEST_TRANSPORT", 00:14:24.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "$NVMF_PORT", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:24.999 "hdgst": ${hdgst:-false}, 00:14:24.999 "ddgst": ${ddgst:-false} 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 } 00:14:24.999 EOF 00:14:24.999 )") 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=72650 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@35 -- # sync 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # cat 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # config=() 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # local subsystem config 00:14:24.999 18:09:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:24.999 { 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme$subsystem", 00:14:24.999 "trtype": "$TEST_TRANSPORT", 00:14:24.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "$NVMF_PORT", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:24.999 "hdgst": ${hdgst:-false}, 00:14:24.999 "ddgst": ${ddgst:-false} 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 } 00:14:24.999 EOF 00:14:24.999 )") 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # config=() 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # local subsystem config 00:14:24.999 18:09:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:24.999 { 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme$subsystem", 00:14:24.999 "trtype": "$TEST_TRANSPORT", 00:14:24.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "$NVMF_PORT", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:14:24.999 "hdgst": ${hdgst:-false}, 00:14:24.999 "ddgst": ${ddgst:-false} 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 } 00:14:24.999 EOF 00:14:24.999 )") 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # config=() 00:14:24.999 18:09:22 -- nvmf/common.sh@520 -- # local subsystem config 00:14:24.999 18:09:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # cat 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:24.999 { 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme$subsystem", 00:14:24.999 "trtype": "$TEST_TRANSPORT", 00:14:24.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "$NVMF_PORT", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:24.999 "hdgst": ${hdgst:-false}, 00:14:24.999 "ddgst": ${ddgst:-false} 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 } 00:14:24.999 EOF 00:14:24.999 )") 00:14:24.999 18:09:22 -- nvmf/common.sh@544 -- # jq . 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # cat 00:14:24.999 18:09:22 -- nvmf/common.sh@542 -- # cat 00:14:24.999 18:09:22 -- nvmf/common.sh@545 -- # IFS=, 00:14:24.999 18:09:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme1", 00:14:24.999 "trtype": "tcp", 00:14:24.999 "traddr": "10.0.0.2", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "4420", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.999 "hdgst": false, 00:14:24.999 "ddgst": false 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 }' 00:14:24.999 18:09:22 -- nvmf/common.sh@544 -- # jq . 00:14:24.999 18:09:22 -- nvmf/common.sh@545 -- # IFS=, 00:14:24.999 18:09:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme1", 00:14:24.999 "trtype": "tcp", 00:14:24.999 "traddr": "10.0.0.2", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "4420", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.999 "hdgst": false, 00:14:24.999 "ddgst": false 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 }' 00:14:24.999 18:09:22 -- nvmf/common.sh@544 -- # jq . 00:14:24.999 18:09:22 -- nvmf/common.sh@545 -- # IFS=, 00:14:24.999 18:09:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme1", 00:14:24.999 "trtype": "tcp", 00:14:24.999 "traddr": "10.0.0.2", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "4420", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.999 "hdgst": false, 00:14:24.999 "ddgst": false 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 }' 00:14:24.999 18:09:22 -- nvmf/common.sh@544 -- # jq . 
00:14:24.999 18:09:22 -- nvmf/common.sh@545 -- # IFS=, 00:14:24.999 18:09:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:24.999 "params": { 00:14:24.999 "name": "Nvme1", 00:14:24.999 "trtype": "tcp", 00:14:24.999 "traddr": "10.0.0.2", 00:14:24.999 "adrfam": "ipv4", 00:14:24.999 "trsvcid": "4420", 00:14:24.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.999 "hdgst": false, 00:14:24.999 "ddgst": false 00:14:24.999 }, 00:14:24.999 "method": "bdev_nvme_attach_controller" 00:14:24.999 }' 00:14:24.999 18:09:22 -- target/bdev_io_wait.sh@37 -- # wait 72644 00:14:24.999 [2024-04-25 18:09:22.928236] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:24.999 [2024-04-25 18:09:22.928338] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:25.258 [2024-04-25 18:09:22.938292] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:25.258 [2024-04-25 18:09:22.938572] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:25.258 [2024-04-25 18:09:22.953578] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:25.258 [2024-04-25 18:09:22.953674] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:25.258 [2024-04-25 18:09:22.964953] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:25.258 [2024-04-25 18:09:22.965069] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:25.258 [2024-04-25 18:09:23.131798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.517 [2024-04-25 18:09:23.197475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.517 [2024-04-25 18:09:23.236699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:25.517 [2024-04-25 18:09:23.274939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.517 [2024-04-25 18:09:23.292262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:25.517 [2024-04-25 18:09:23.351689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.517 Running I/O for 1 seconds... 00:14:25.517 [2024-04-25 18:09:23.385756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:25.517 Running I/O for 1 seconds... 00:14:25.775 [2024-04-25 18:09:23.451915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:25.775 Running I/O for 1 seconds... 00:14:25.775 Running I/O for 1 seconds... 
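The tables that follow are the 1-second results of four bdevperf instances run in parallel against the same nqn.2016-06.io.spdk:cnode1 subsystem, one per workload (write, read, flush, unmap), each pinned to its own core and fed the generated target JSON via /dev/fd/63. Reduced to one representative invocation (the write job; the other three differ only in -m, -i and -w):

# Sketch of one of the four concurrent jobs traced above (the write job, PID 72644).
# gen_nvmf_target_json is the framework helper whose JSON output is printed in the trace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(gen_nvmf_target_json)     # the process substitution is what shows up as /dev/fd/63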
00:14:26.712 00:14:26.712 Latency(us) 00:14:26.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.712 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:26.712 Nvme1n1 : 1.00 205312.97 802.00 0.00 0.00 621.06 237.38 1377.75 00:14:26.712 =================================================================================================================== 00:14:26.712 Total : 205312.97 802.00 0.00 0.00 621.06 237.38 1377.75 00:14:26.712 00:14:26.712 Latency(us) 00:14:26.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.712 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:26.712 Nvme1n1 : 1.03 6073.93 23.73 0.00 0.00 20825.94 7387.69 40274.85 00:14:26.712 =================================================================================================================== 00:14:26.712 Total : 6073.93 23.73 0.00 0.00 20825.94 7387.69 40274.85 00:14:26.712 00:14:26.712 Latency(us) 00:14:26.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.712 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:26.712 Nvme1n1 : 1.01 5677.83 22.18 0.00 0.00 22421.70 10545.34 46232.67 00:14:26.712 =================================================================================================================== 00:14:26.712 Total : 5677.83 22.18 0.00 0.00 22421.70 10545.34 46232.67 00:14:26.712 00:14:26.712 Latency(us) 00:14:26.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.712 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:26.712 Nvme1n1 : 1.01 7471.43 29.19 0.00 0.00 17038.80 8102.63 26691.03 00:14:26.712 =================================================================================================================== 00:14:26.712 Total : 7471.43 29.19 0.00 0.00 17038.80 8102.63 26691.03 00:14:27.280 18:09:25 -- target/bdev_io_wait.sh@38 -- # wait 72646 00:14:27.280 18:09:25 -- target/bdev_io_wait.sh@39 -- # wait 72648 00:14:27.280 18:09:25 -- target/bdev_io_wait.sh@40 -- # wait 72650 00:14:27.280 18:09:25 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.280 18:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.280 18:09:25 -- common/autotest_common.sh@10 -- # set +x 00:14:27.280 18:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.280 18:09:25 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:27.280 18:09:25 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:27.280 18:09:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:27.280 18:09:25 -- nvmf/common.sh@116 -- # sync 00:14:27.280 18:09:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:27.280 18:09:25 -- nvmf/common.sh@119 -- # set +e 00:14:27.280 18:09:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:27.280 18:09:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:27.280 rmmod nvme_tcp 00:14:27.280 rmmod nvme_fabrics 00:14:27.280 rmmod nvme_keyring 00:14:27.280 18:09:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:27.280 18:09:25 -- nvmf/common.sh@123 -- # set -e 00:14:27.280 18:09:25 -- nvmf/common.sh@124 -- # return 0 00:14:27.280 18:09:25 -- nvmf/common.sh@477 -- # '[' -n 72591 ']' 00:14:27.280 18:09:25 -- nvmf/common.sh@478 -- # killprocess 72591 00:14:27.280 18:09:25 -- common/autotest_common.sh@926 -- # '[' -z 72591 ']' 00:14:27.280 18:09:25 -- common/autotest_common.sh@930 
-- # kill -0 72591 00:14:27.280 18:09:25 -- common/autotest_common.sh@931 -- # uname 00:14:27.280 18:09:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:27.280 18:09:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72591 00:14:27.280 killing process with pid 72591 00:14:27.280 18:09:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:27.280 18:09:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:27.280 18:09:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72591' 00:14:27.280 18:09:25 -- common/autotest_common.sh@945 -- # kill 72591 00:14:27.280 18:09:25 -- common/autotest_common.sh@950 -- # wait 72591 00:14:27.849 18:09:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:27.849 18:09:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:27.849 18:09:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:27.849 18:09:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.849 18:09:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:27.849 18:09:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.849 18:09:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.849 18:09:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.849 18:09:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:27.849 00:14:27.849 real 0m4.365s 00:14:27.849 user 0m19.727s 00:14:27.849 sys 0m1.998s 00:14:27.849 18:09:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.849 ************************************ 00:14:27.849 END TEST nvmf_bdev_io_wait 00:14:27.849 18:09:25 -- common/autotest_common.sh@10 -- # set +x 00:14:27.849 ************************************ 00:14:27.849 18:09:25 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:27.849 18:09:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:27.849 18:09:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.849 18:09:25 -- common/autotest_common.sh@10 -- # set +x 00:14:27.849 ************************************ 00:14:27.849 START TEST nvmf_queue_depth 00:14:27.849 ************************************ 00:14:27.849 18:09:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:27.849 * Looking for test storage... 
00:14:27.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:27.849 18:09:25 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.849 18:09:25 -- nvmf/common.sh@7 -- # uname -s 00:14:27.849 18:09:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.849 18:09:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.849 18:09:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.849 18:09:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.849 18:09:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.849 18:09:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.849 18:09:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.849 18:09:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.849 18:09:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.849 18:09:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.849 18:09:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:14:27.849 18:09:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:14:27.849 18:09:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.849 18:09:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.849 18:09:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.849 18:09:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.849 18:09:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.849 18:09:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.849 18:09:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.849 18:09:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.849 18:09:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.849 18:09:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.850 18:09:25 -- 
paths/export.sh@5 -- # export PATH 00:14:27.850 18:09:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.850 18:09:25 -- nvmf/common.sh@46 -- # : 0 00:14:27.850 18:09:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:27.850 18:09:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:27.850 18:09:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:27.850 18:09:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.850 18:09:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.850 18:09:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:27.850 18:09:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:27.850 18:09:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:27.850 18:09:25 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:27.850 18:09:25 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:27.850 18:09:25 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.850 18:09:25 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:27.850 18:09:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:27.850 18:09:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.850 18:09:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:27.850 18:09:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:27.850 18:09:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:27.850 18:09:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.850 18:09:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.850 18:09:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.850 18:09:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:27.850 18:09:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:27.850 18:09:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:27.850 18:09:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:27.850 18:09:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:27.850 18:09:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:27.850 18:09:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.850 18:09:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.850 18:09:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:27.850 18:09:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:27.850 18:09:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:27.850 18:09:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:27.850 18:09:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:27.850 18:09:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.850 18:09:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:27.850 18:09:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:27.850 18:09:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:27.850 18:09:25 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:27.850 18:09:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:27.850 18:09:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:27.850 Cannot find device "nvmf_tgt_br" 00:14:27.850 18:09:25 -- nvmf/common.sh@154 -- # true 00:14:27.850 18:09:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.850 Cannot find device "nvmf_tgt_br2" 00:14:27.850 18:09:25 -- nvmf/common.sh@155 -- # true 00:14:27.850 18:09:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:27.850 18:09:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:27.850 Cannot find device "nvmf_tgt_br" 00:14:27.850 18:09:25 -- nvmf/common.sh@157 -- # true 00:14:27.850 18:09:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:27.850 Cannot find device "nvmf_tgt_br2" 00:14:27.850 18:09:25 -- nvmf/common.sh@158 -- # true 00:14:27.850 18:09:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:27.850 18:09:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:28.109 18:09:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.109 18:09:25 -- nvmf/common.sh@161 -- # true 00:14:28.109 18:09:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.109 18:09:25 -- nvmf/common.sh@162 -- # true 00:14:28.109 18:09:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.109 18:09:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.109 18:09:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.109 18:09:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.109 18:09:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.109 18:09:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.109 18:09:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.109 18:09:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:28.109 18:09:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:28.109 18:09:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:28.109 18:09:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:28.109 18:09:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:28.109 18:09:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:28.109 18:09:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.109 18:09:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.109 18:09:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:28.109 18:09:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:28.109 18:09:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:28.109 18:09:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.109 18:09:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.109 18:09:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:28.109 
18:09:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:28.109 18:09:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.109 18:09:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:28.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:14:28.109 00:14:28.109 --- 10.0.0.2 ping statistics --- 00:14:28.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.109 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:14:28.109 18:09:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:28.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:28.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:14:28.109 00:14:28.109 --- 10.0.0.3 ping statistics --- 00:14:28.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.110 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:28.110 18:09:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:28.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:28.110 00:14:28.110 --- 10.0.0.1 ping statistics --- 00:14:28.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.110 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:28.110 18:09:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.110 18:09:25 -- nvmf/common.sh@421 -- # return 0 00:14:28.110 18:09:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:28.110 18:09:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.110 18:09:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:28.110 18:09:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:28.110 18:09:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.110 18:09:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:28.110 18:09:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:28.110 18:09:26 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:28.110 18:09:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:28.110 18:09:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:28.110 18:09:26 -- common/autotest_common.sh@10 -- # set +x 00:14:28.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.110 18:09:26 -- nvmf/common.sh@469 -- # nvmfpid=72886 00:14:28.110 18:09:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:28.110 18:09:26 -- nvmf/common.sh@470 -- # waitforlisten 72886 00:14:28.110 18:09:26 -- common/autotest_common.sh@819 -- # '[' -z 72886 ']' 00:14:28.110 18:09:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.110 18:09:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:28.110 18:09:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.110 18:09:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:28.110 18:09:26 -- common/autotest_common.sh@10 -- # set +x 00:14:28.368 [2024-04-25 18:09:26.069018] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
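The nvmf_veth_init sequence above builds the test network from scratch: a dedicated namespace nvmf_tgt_ns_spdk holds both target interfaces (10.0.0.2 and 10.0.0.3), the initiator interface 10.0.0.1 stays in the default namespace, the peer ends of all three veth pairs are enslaved to the nvmf_br bridge, and an iptables rule admits TCP port 4420 before connectivity is verified with ping. A condensed sketch of the equivalent commands, assuming a clean host with none of these interfaces present:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up && ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host-to-target connectivity check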
00:14:28.368 [2024-04-25 18:09:26.069144] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.368 [2024-04-25 18:09:26.208729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.627 [2024-04-25 18:09:26.319760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.627 [2024-04-25 18:09:26.319911] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.627 [2024-04-25 18:09:26.319925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.627 [2024-04-25 18:09:26.319936] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.627 [2024-04-25 18:09:26.319968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.195 18:09:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:29.195 18:09:27 -- common/autotest_common.sh@852 -- # return 0 00:14:29.195 18:09:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:29.195 18:09:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:29.195 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.195 18:09:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.195 18:09:27 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.195 18:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.195 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.195 [2024-04-25 18:09:27.101335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.195 18:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.195 18:09:27 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:29.195 18:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.195 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.454 Malloc0 00:14:29.454 18:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.454 18:09:27 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:29.454 18:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.454 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.454 18:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.454 18:09:27 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:29.454 18:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.454 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.454 18:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.454 18:09:27 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.454 18:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.454 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.454 [2024-04-25 18:09:27.158185] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
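At this point the target inside the namespace is fully configured for the queue-depth run. The rpc_cmd calls traced above map onto plain scripts/rpc.py invocations against the default /var/tmp/spdk.sock; a sketch of the same configuration (paths relative to the SPDK repo root):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # same transport options the test passes
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB ramdisk, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420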
00:14:29.454 18:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.454 18:09:27 -- target/queue_depth.sh@30 -- # bdevperf_pid=72936 00:14:29.454 18:09:27 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:29.454 18:09:27 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.454 18:09:27 -- target/queue_depth.sh@33 -- # waitforlisten 72936 /var/tmp/bdevperf.sock 00:14:29.454 18:09:27 -- common/autotest_common.sh@819 -- # '[' -z 72936 ']' 00:14:29.454 18:09:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.454 18:09:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:29.454 18:09:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.454 18:09:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:29.454 18:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.454 [2024-04-25 18:09:27.219826] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:29.454 [2024-04-25 18:09:27.220198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72936 ] 00:14:29.454 [2024-04-25 18:09:27.361020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.712 [2024-04-25 18:09:27.488038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.322 18:09:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.322 18:09:28 -- common/autotest_common.sh@852 -- # return 0 00:14:30.322 18:09:28 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:30.322 18:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.322 18:09:28 -- common/autotest_common.sh@10 -- # set +x 00:14:30.581 NVMe0n1 00:14:30.581 18:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.581 18:09:28 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:30.581 Running I/O for 10 seconds... 
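The host side of the queue-depth test is the bdevperf application started in RPC-wait mode (-z): it attaches an NVMe-oF controller through the first listener, then perform_tests drives the verify workload at queue depth 1024 for 10 seconds. Condensed from the traced invocations (paths relative to the SPDK repo root; rpc_cmd again stands in for scripts/rpc.py):

build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # expose the remote namespace as a local bdev
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests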
00:14:40.552 00:14:40.552 Latency(us) 00:14:40.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.552 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:40.552 Verification LBA range: start 0x0 length 0x4000 00:14:40.552 NVMe0n1 : 10.06 14848.46 58.00 0.00 0.00 68725.72 12213.53 53143.74 00:14:40.552 =================================================================================================================== 00:14:40.552 Total : 14848.46 58.00 0.00 0.00 68725.72 12213.53 53143.74 00:14:40.552 0 00:14:40.552 18:09:38 -- target/queue_depth.sh@39 -- # killprocess 72936 00:14:40.552 18:09:38 -- common/autotest_common.sh@926 -- # '[' -z 72936 ']' 00:14:40.552 18:09:38 -- common/autotest_common.sh@930 -- # kill -0 72936 00:14:40.811 18:09:38 -- common/autotest_common.sh@931 -- # uname 00:14:40.811 18:09:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:40.811 18:09:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72936 00:14:40.811 18:09:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:40.811 killing process with pid 72936 00:14:40.811 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.811 00:14:40.811 Latency(us) 00:14:40.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.811 =================================================================================================================== 00:14:40.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.811 18:09:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:40.811 18:09:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72936' 00:14:40.811 18:09:38 -- common/autotest_common.sh@945 -- # kill 72936 00:14:40.811 18:09:38 -- common/autotest_common.sh@950 -- # wait 72936 00:14:41.068 18:09:38 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:41.068 18:09:38 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:41.068 18:09:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.068 18:09:38 -- nvmf/common.sh@116 -- # sync 00:14:41.068 18:09:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.069 18:09:38 -- nvmf/common.sh@119 -- # set +e 00:14:41.069 18:09:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.069 18:09:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.069 rmmod nvme_tcp 00:14:41.069 rmmod nvme_fabrics 00:14:41.069 rmmod nvme_keyring 00:14:41.069 18:09:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.069 18:09:38 -- nvmf/common.sh@123 -- # set -e 00:14:41.069 18:09:38 -- nvmf/common.sh@124 -- # return 0 00:14:41.069 18:09:38 -- nvmf/common.sh@477 -- # '[' -n 72886 ']' 00:14:41.069 18:09:38 -- nvmf/common.sh@478 -- # killprocess 72886 00:14:41.069 18:09:38 -- common/autotest_common.sh@926 -- # '[' -z 72886 ']' 00:14:41.069 18:09:38 -- common/autotest_common.sh@930 -- # kill -0 72886 00:14:41.069 18:09:38 -- common/autotest_common.sh@931 -- # uname 00:14:41.069 18:09:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.069 18:09:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72886 00:14:41.069 killing process with pid 72886 00:14:41.069 18:09:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:41.069 18:09:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:41.069 18:09:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72886' 00:14:41.069 18:09:38 -- 
common/autotest_common.sh@945 -- # kill 72886 00:14:41.069 18:09:38 -- common/autotest_common.sh@950 -- # wait 72886 00:14:41.636 18:09:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:41.636 18:09:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:41.636 18:09:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:41.636 18:09:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.636 18:09:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.636 18:09:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.636 18:09:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:41.636 00:14:41.636 real 0m13.715s 00:14:41.636 user 0m23.048s 00:14:41.636 sys 0m2.355s 00:14:41.636 18:09:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.636 ************************************ 00:14:41.636 END TEST nvmf_queue_depth 00:14:41.636 ************************************ 00:14:41.636 18:09:39 -- common/autotest_common.sh@10 -- # set +x 00:14:41.636 18:09:39 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:41.636 18:09:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:41.636 18:09:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:41.636 18:09:39 -- common/autotest_common.sh@10 -- # set +x 00:14:41.636 ************************************ 00:14:41.636 START TEST nvmf_multipath 00:14:41.636 ************************************ 00:14:41.636 18:09:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:41.636 * Looking for test storage... 
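The multipath test starting here reuses the same veth topology but exposes the subsystem on both target addresses and lets the kernel host driver manage the two paths. Later in the trace the host connects once per listener, reusing the host NQN and host ID generated in nvmf/common.sh; roughly:

nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# the two controller paths then appear as nvme0c0n1 and nvme0c1n1 under /sys/class/nvme-subsystem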
00:14:41.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.636 18:09:39 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.636 18:09:39 -- nvmf/common.sh@7 -- # uname -s 00:14:41.636 18:09:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.636 18:09:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.636 18:09:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.636 18:09:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.636 18:09:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.636 18:09:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.636 18:09:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.636 18:09:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.636 18:09:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.636 18:09:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:14:41.636 18:09:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:14:41.636 18:09:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.636 18:09:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.636 18:09:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.636 18:09:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.636 18:09:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.636 18:09:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.636 18:09:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.636 18:09:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.636 18:09:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.636 18:09:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.636 18:09:39 -- 
paths/export.sh@5 -- # export PATH 00:14:41.636 18:09:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.636 18:09:39 -- nvmf/common.sh@46 -- # : 0 00:14:41.636 18:09:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:41.636 18:09:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:41.636 18:09:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:41.636 18:09:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.636 18:09:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.636 18:09:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:41.636 18:09:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:41.636 18:09:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:41.636 18:09:39 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.636 18:09:39 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.636 18:09:39 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:41.636 18:09:39 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.636 18:09:39 -- target/multipath.sh@43 -- # nvmftestinit 00:14:41.636 18:09:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:41.636 18:09:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.636 18:09:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:41.636 18:09:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:41.636 18:09:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:41.636 18:09:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.636 18:09:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.636 18:09:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.636 18:09:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:41.636 18:09:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:41.636 18:09:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.636 18:09:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.636 18:09:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:41.636 18:09:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:41.636 18:09:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.636 18:09:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.636 18:09:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.636 18:09:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.636 18:09:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.636 18:09:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.636 18:09:39 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.637 18:09:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.637 18:09:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:41.637 18:09:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:41.637 Cannot find device "nvmf_tgt_br" 00:14:41.637 18:09:39 -- nvmf/common.sh@154 -- # true 00:14:41.637 18:09:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.637 Cannot find device "nvmf_tgt_br2" 00:14:41.637 18:09:39 -- nvmf/common.sh@155 -- # true 00:14:41.637 18:09:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:41.637 18:09:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:41.637 Cannot find device "nvmf_tgt_br" 00:14:41.637 18:09:39 -- nvmf/common.sh@157 -- # true 00:14:41.637 18:09:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:41.637 Cannot find device "nvmf_tgt_br2" 00:14:41.637 18:09:39 -- nvmf/common.sh@158 -- # true 00:14:41.637 18:09:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:41.637 18:09:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:41.895 18:09:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.895 18:09:39 -- nvmf/common.sh@161 -- # true 00:14:41.895 18:09:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.895 18:09:39 -- nvmf/common.sh@162 -- # true 00:14:41.895 18:09:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.895 18:09:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:41.895 18:09:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.895 18:09:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.895 18:09:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:41.895 18:09:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.895 18:09:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.895 18:09:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:41.895 18:09:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:41.895 18:09:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:41.895 18:09:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:41.895 18:09:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:41.895 18:09:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:41.895 18:09:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.895 18:09:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.895 18:09:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.895 18:09:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:41.895 18:09:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:41.895 18:09:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.895 18:09:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.895 18:09:39 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.895 18:09:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.895 18:09:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.895 18:09:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:41.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:14:41.895 00:14:41.895 --- 10.0.0.2 ping statistics --- 00:14:41.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.895 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:41.895 18:09:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:41.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:41.895 00:14:41.895 --- 10.0.0.3 ping statistics --- 00:14:41.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.895 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:41.895 18:09:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:14:41.895 00:14:41.895 --- 10.0.0.1 ping statistics --- 00:14:41.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.895 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:41.895 18:09:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.895 18:09:39 -- nvmf/common.sh@421 -- # return 0 00:14:41.895 18:09:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:41.895 18:09:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.895 18:09:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:41.895 18:09:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:41.895 18:09:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.895 18:09:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:41.895 18:09:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:41.895 18:09:39 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:41.895 18:09:39 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:41.895 18:09:39 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:41.895 18:09:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:41.895 18:09:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:41.895 18:09:39 -- common/autotest_common.sh@10 -- # set +x 00:14:41.895 18:09:39 -- nvmf/common.sh@469 -- # nvmfpid=73273 00:14:41.895 18:09:39 -- nvmf/common.sh@470 -- # waitforlisten 73273 00:14:41.895 18:09:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.895 18:09:39 -- common/autotest_common.sh@819 -- # '[' -z 73273 ']' 00:14:41.895 18:09:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.895 18:09:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:41.895 18:09:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
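With the multipath target app up (four reactors, -m 0xF), the configuration that follows in the trace creates the subsystem with ANA reporting enabled (-r), adds a listener on each target address, and then flips the per-listener ANA states while fio is running, checking the kernel-reported state in /sys/block/nvme0cXn1/ana_state each time. The relevant RPCs, condensed as a sketch:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r enables ANA reporting
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# e.g. fail the first path over to the second while I/O is in flight
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized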
00:14:41.896 18:09:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:41.896 18:09:39 -- common/autotest_common.sh@10 -- # set +x 00:14:42.154 [2024-04-25 18:09:39.871191] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:42.154 [2024-04-25 18:09:39.871319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.154 [2024-04-25 18:09:40.013944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.453 [2024-04-25 18:09:40.145236] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:42.453 [2024-04-25 18:09:40.146469] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.453 [2024-04-25 18:09:40.146670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.453 [2024-04-25 18:09:40.146693] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.453 [2024-04-25 18:09:40.146836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.453 [2024-04-25 18:09:40.147176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.453 [2024-04-25 18:09:40.147324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.453 [2024-04-25 18:09:40.147332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.020 18:09:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:43.020 18:09:40 -- common/autotest_common.sh@852 -- # return 0 00:14:43.020 18:09:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:43.020 18:09:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:43.020 18:09:40 -- common/autotest_common.sh@10 -- # set +x 00:14:43.020 18:09:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.020 18:09:40 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:43.278 [2024-04-25 18:09:41.143988] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.278 18:09:41 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:43.844 Malloc0 00:14:43.844 18:09:41 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:43.844 18:09:41 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:44.411 18:09:42 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.411 [2024-04-25 18:09:42.327477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.670 18:09:42 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:44.670 [2024-04-25 18:09:42.563884] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:44.670 18:09:42 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:44.929 18:09:42 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:45.187 18:09:43 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.187 18:09:43 -- common/autotest_common.sh@1177 -- # local i=0 00:14:45.187 18:09:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.187 18:09:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:45.187 18:09:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:47.091 18:09:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:47.091 18:09:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:47.091 18:09:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.410 18:09:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:47.410 18:09:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.410 18:09:45 -- common/autotest_common.sh@1187 -- # return 0 00:14:47.410 18:09:45 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:47.410 18:09:45 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:47.410 18:09:45 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:14:47.410 18:09:45 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:47.410 18:09:45 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:47.410 18:09:45 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:47.410 18:09:45 -- target/multipath.sh@38 -- # return 0 00:14:47.410 18:09:45 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:47.410 18:09:45 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:47.410 18:09:45 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:47.410 18:09:45 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:47.410 18:09:45 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:47.410 18:09:45 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:14:47.410 18:09:45 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:47.410 18:09:45 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:47.410 18:09:45 -- target/multipath.sh@22 -- # local timeout=20 00:14:47.410 18:09:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:47.410 18:09:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:47.410 18:09:45 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:47.410 18:09:45 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:47.410 18:09:45 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:47.410 18:09:45 -- target/multipath.sh@22 -- # local timeout=20 00:14:47.410 18:09:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:47.410 18:09:45 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:47.410 18:09:45 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:47.410 18:09:45 -- target/multipath.sh@85 -- # echo numa 00:14:47.410 18:09:45 -- target/multipath.sh@88 -- # fio_pid=73411 00:14:47.410 18:09:45 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:47.410 18:09:45 -- target/multipath.sh@90 -- # sleep 1 00:14:47.410 [global] 00:14:47.410 thread=1 00:14:47.410 invalidate=1 00:14:47.410 rw=randrw 00:14:47.410 time_based=1 00:14:47.410 runtime=6 00:14:47.410 ioengine=libaio 00:14:47.410 direct=1 00:14:47.410 bs=4096 00:14:47.410 iodepth=128 00:14:47.410 norandommap=0 00:14:47.410 numjobs=1 00:14:47.410 00:14:47.410 verify_dump=1 00:14:47.410 verify_backlog=512 00:14:47.410 verify_state_save=0 00:14:47.410 do_verify=1 00:14:47.410 verify=crc32c-intel 00:14:47.410 [job0] 00:14:47.410 filename=/dev/nvme0n1 00:14:47.410 Could not set queue depth (nvme0n1) 00:14:47.410 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:47.410 fio-3.35 00:14:47.410 Starting 1 thread 00:14:48.343 18:09:46 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:48.601 18:09:46 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:48.859 18:09:46 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:48.859 18:09:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:48.859 18:09:46 -- target/multipath.sh@22 -- # local timeout=20 00:14:48.859 18:09:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:48.859 18:09:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:48.859 18:09:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:48.859 18:09:46 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:48.859 18:09:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:48.859 18:09:46 -- target/multipath.sh@22 -- # local timeout=20 00:14:48.859 18:09:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:48.859 18:09:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:48.859 18:09:46 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:48.859 18:09:46 -- target/multipath.sh@25 -- # sleep 1s 00:14:49.794 18:09:47 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:49.794 18:09:47 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:49.794 18:09:47 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:49.794 18:09:47 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:50.053 18:09:47 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:50.311 18:09:48 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:50.311 18:09:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:50.312 18:09:48 -- target/multipath.sh@22 -- # local timeout=20 00:14:50.312 18:09:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:50.312 18:09:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:50.312 18:09:48 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:50.312 18:09:48 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:50.312 18:09:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:50.312 18:09:48 -- target/multipath.sh@22 -- # local timeout=20 00:14:50.312 18:09:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:50.312 18:09:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:50.312 18:09:48 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:50.312 18:09:48 -- target/multipath.sh@25 -- # sleep 1s 00:14:51.272 18:09:49 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:51.272 18:09:49 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:51.272 18:09:49 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:51.272 18:09:49 -- target/multipath.sh@104 -- # wait 73411 00:14:53.802 00:14:53.802 job0: (groupid=0, jobs=1): err= 0: pid=73432: Thu Apr 25 18:09:51 2024 00:14:53.802 read: IOPS=11.3k, BW=44.1MiB/s (46.2MB/s)(265MiB/6005msec) 00:14:53.802 slat (usec): min=6, max=5170, avg=51.49, stdev=230.02 00:14:53.802 clat (usec): min=1252, max=13857, avg=7736.05, stdev=1170.86 00:14:53.802 lat (usec): min=1495, max=13868, avg=7787.54, stdev=1180.29 00:14:53.802 clat percentiles (usec): 00:14:53.802 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 6915], 00:14:53.802 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7963], 00:14:53.802 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9634], 00:14:53.802 | 99.00th=[11338], 99.50th=[11731], 99.90th=[12518], 99.95th=[13173], 00:14:53.802 | 99.99th=[13829] 00:14:53.802 bw ( KiB/s): min= 9080, max=30664, per=52.21%, avg=23563.64, stdev=7192.19, samples=11 00:14:53.802 iops : min= 2270, max= 7666, avg=5890.91, stdev=1798.05, samples=11 00:14:53.802 write: IOPS=6730, BW=26.3MiB/s (27.6MB/s)(139MiB/5273msec); 0 zone resets 00:14:53.802 slat (usec): min=14, max=2852, avg=61.31, stdev=158.12 00:14:53.802 clat (usec): min=1994, max=13602, avg=6695.85, stdev=948.22 00:14:53.802 lat (usec): min=2035, max=13636, avg=6757.16, stdev=952.12 00:14:53.802 clat percentiles (usec): 00:14:53.802 | 1.00th=[ 3752], 5.00th=[ 4948], 10.00th=[ 5735], 20.00th=[ 6128], 00:14:53.802 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6915], 00:14:53.802 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7898], 00:14:53.802 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[11863], 99.95th=[12125], 00:14:53.802 | 99.99th=[13042] 00:14:53.802 bw ( KiB/s): min= 9208, max=30120, per=87.73%, avg=23618.18, stdev=6937.35, samples=11 00:14:53.802 iops : min= 2302, max= 7530, avg=5904.55, stdev=1734.34, samples=11 00:14:53.802 lat (msec) : 2=0.01%, 4=0.79%, 10=96.37%, 20=2.84% 00:14:53.802 cpu : usr=5.75%, sys=21.99%, ctx=6286, majf=0, minf=121 00:14:53.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:53.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:53.802 issued rwts: total=67747,35488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:53.802 00:14:53.802 Run status group 0 (all jobs): 00:14:53.802 READ: bw=44.1MiB/s (46.2MB/s), 44.1MiB/s-44.1MiB/s (46.2MB/s-46.2MB/s), io=265MiB (277MB), run=6005-6005msec 00:14:53.802 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=139MiB (145MB), run=5273-5273msec 00:14:53.802 00:14:53.802 Disk stats (read/write): 00:14:53.802 nvme0n1: ios=66738/34828, merge=0/0, ticks=483818/218077, in_queue=701895, util=98.61% 00:14:53.802 18:09:51 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:53.802 18:09:51 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:54.060 18:09:51 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:54.060 
18:09:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:54.060 18:09:51 -- target/multipath.sh@22 -- # local timeout=20 00:14:54.060 18:09:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:54.060 18:09:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:54.060 18:09:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:54.060 18:09:51 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:54.060 18:09:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:54.060 18:09:51 -- target/multipath.sh@22 -- # local timeout=20 00:14:54.060 18:09:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:54.060 18:09:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:54.060 18:09:51 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:14:54.060 18:09:51 -- target/multipath.sh@25 -- # sleep 1s 00:14:54.996 18:09:52 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:54.996 18:09:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:54.996 18:09:52 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:54.996 18:09:52 -- target/multipath.sh@113 -- # echo round-robin 00:14:54.996 18:09:52 -- target/multipath.sh@116 -- # fio_pid=73562 00:14:54.996 18:09:52 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:54.996 18:09:52 -- target/multipath.sh@118 -- # sleep 1 00:14:54.996 [global] 00:14:54.996 thread=1 00:14:54.996 invalidate=1 00:14:54.996 rw=randrw 00:14:54.996 time_based=1 00:14:54.996 runtime=6 00:14:54.996 ioengine=libaio 00:14:54.996 direct=1 00:14:54.996 bs=4096 00:14:54.996 iodepth=128 00:14:54.996 norandommap=0 00:14:54.996 numjobs=1 00:14:54.996 00:14:54.996 verify_dump=1 00:14:54.996 verify_backlog=512 00:14:54.996 verify_state_save=0 00:14:54.996 do_verify=1 00:14:54.996 verify=crc32c-intel 00:14:54.996 [job0] 00:14:54.996 filename=/dev/nvme0n1 00:14:54.996 Could not set queue depth (nvme0n1) 00:14:55.254 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:55.254 fio-3.35 00:14:55.254 Starting 1 thread 00:14:56.187 18:09:53 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:56.445 18:09:54 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:56.445 18:09:54 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:56.445 18:09:54 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:56.445 18:09:54 -- target/multipath.sh@22 -- # local timeout=20 00:14:56.445 18:09:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:56.445 18:09:54 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:14:56.445 18:09:54 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:56.445 18:09:54 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:56.445 18:09:54 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:56.445 18:09:54 -- target/multipath.sh@22 -- # local timeout=20 00:14:56.445 18:09:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:56.445 18:09:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:56.445 18:09:54 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:56.445 18:09:54 -- target/multipath.sh@25 -- # sleep 1s 00:14:57.822 18:09:55 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:57.822 18:09:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:57.822 18:09:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:57.822 18:09:55 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:57.822 18:09:55 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:58.080 18:09:55 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:58.080 18:09:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:58.080 18:09:55 -- target/multipath.sh@22 -- # local timeout=20 00:14:58.080 18:09:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:58.080 18:09:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:58.080 18:09:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:58.080 18:09:55 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:58.080 18:09:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:58.080 18:09:55 -- target/multipath.sh@22 -- # local timeout=20 00:14:58.080 18:09:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:58.080 18:09:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:58.080 18:09:55 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:58.080 18:09:55 -- target/multipath.sh@25 -- # sleep 1s 00:14:59.016 18:09:56 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:59.016 18:09:56 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:59.016 18:09:56 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:59.016 18:09:56 -- target/multipath.sh@132 -- # wait 73562 00:15:01.623 00:15:01.623 job0: (groupid=0, jobs=1): err= 0: pid=73583: Thu Apr 25 18:09:59 2024 00:15:01.623 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(285MiB/6003msec) 00:15:01.623 slat (usec): min=4, max=5847, avg=40.60, stdev=192.66 00:15:01.623 clat (usec): min=550, max=17100, avg=7219.08, stdev=1600.30 00:15:01.623 lat (usec): min=564, max=17111, avg=7259.68, stdev=1606.92 00:15:01.623 clat percentiles (usec): 00:15:01.623 | 1.00th=[ 3064], 5.00th=[ 4424], 10.00th=[ 5342], 20.00th=[ 6259], 00:15:01.623 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7439], 00:15:01.623 | 70.00th=[ 7832], 80.00th=[ 8225], 90.00th=[ 8979], 95.00th=[10028], 00:15:01.623 | 99.00th=[11731], 99.50th=[12387], 99.90th=[14091], 99.95th=[14746], 00:15:01.623 | 99.99th=[15926] 00:15:01.623 bw ( KiB/s): min=12904, max=33896, per=53.44%, avg=26011.64, stdev=7814.54, samples=11 00:15:01.623 iops : min= 3226, max= 8474, avg=6502.91, stdev=1953.63, samples=11 00:15:01.623 write: IOPS=7410, BW=28.9MiB/s (30.4MB/s)(152MiB/5249msec); 0 zone resets 00:15:01.623 slat (usec): min=15, max=2383, avg=52.19, stdev=126.77 00:15:01.623 clat (usec): min=536, max=13516, avg=6032.49, stdev=1427.58 00:15:01.623 lat (usec): min=563, max=13541, avg=6084.68, stdev=1433.08 00:15:01.623 clat percentiles (usec): 00:15:01.623 | 1.00th=[ 2376], 5.00th=[ 3294], 10.00th=[ 3884], 20.00th=[ 5080], 00:15:01.623 | 30.00th=[ 5669], 40.00th=[ 5997], 50.00th=[ 6259], 60.00th=[ 6456], 00:15:01.623 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 7439], 95.00th=[ 8029], 00:15:01.623 | 99.00th=[ 9765], 99.50th=[10421], 99.90th=[11600], 99.95th=[11863], 00:15:01.623 | 99.99th=[13304] 00:15:01.623 bw ( KiB/s): min=13256, max=34864, per=87.67%, avg=25989.82, stdev=7537.03, samples=11 00:15:01.623 iops : min= 3314, max= 8716, avg=6497.45, stdev=1884.26, samples=11 00:15:01.623 lat (usec) : 750=0.01%, 1000=0.01% 00:15:01.623 lat (msec) : 2=0.24%, 4=5.77%, 10=90.34%, 20=3.63% 00:15:01.623 cpu : usr=5.23%, sys=25.54%, ctx=6964, majf=0, minf=169 00:15:01.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:01.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.623 issued rwts: total=73043,38900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.623 00:15:01.623 Run status group 0 (all jobs): 00:15:01.623 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=285MiB (299MB), run=6003-6003msec 00:15:01.623 WRITE: bw=28.9MiB/s (30.4MB/s), 28.9MiB/s-28.9MiB/s (30.4MB/s-30.4MB/s), io=152MiB (159MB), run=5249-5249msec 00:15:01.623 00:15:01.623 Disk stats (read/write): 00:15:01.623 nvme0n1: ios=71788/38388, merge=0/0, ticks=484038/215342, in_queue=699380, util=98.62% 00:15:01.623 18:09:59 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:01.623 18:09:59 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.623 18:09:59 -- common/autotest_common.sh@1198 -- # local i=0 00:15:01.623 18:09:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:01.623 18:09:59 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.623 18:09:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:01.623 18:09:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.623 18:09:59 -- common/autotest_common.sh@1210 -- # return 0 00:15:01.623 18:09:59 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.882 18:09:59 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:01.882 18:09:59 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:01.882 18:09:59 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:01.882 18:09:59 -- target/multipath.sh@144 -- # nvmftestfini 00:15:01.882 18:09:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.882 18:09:59 -- nvmf/common.sh@116 -- # sync 00:15:01.882 18:09:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.882 18:09:59 -- nvmf/common.sh@119 -- # set +e 00:15:01.882 18:09:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.882 18:09:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.882 rmmod nvme_tcp 00:15:01.882 rmmod nvme_fabrics 00:15:01.882 rmmod nvme_keyring 00:15:01.882 18:09:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.882 18:09:59 -- nvmf/common.sh@123 -- # set -e 00:15:01.882 18:09:59 -- nvmf/common.sh@124 -- # return 0 00:15:01.882 18:09:59 -- nvmf/common.sh@477 -- # '[' -n 73273 ']' 00:15:01.882 18:09:59 -- nvmf/common.sh@478 -- # killprocess 73273 00:15:01.882 18:09:59 -- common/autotest_common.sh@926 -- # '[' -z 73273 ']' 00:15:01.882 18:09:59 -- common/autotest_common.sh@930 -- # kill -0 73273 00:15:01.882 18:09:59 -- common/autotest_common.sh@931 -- # uname 00:15:02.141 18:09:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.141 18:09:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73273 00:15:02.141 killing process with pid 73273 00:15:02.141 18:09:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.141 18:09:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.141 18:09:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73273' 00:15:02.141 18:09:59 -- common/autotest_common.sh@945 -- # kill 73273 00:15:02.141 18:09:59 -- common/autotest_common.sh@950 -- # wait 73273 00:15:02.400 18:10:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.400 18:10:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.400 18:10:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.400 18:10:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.400 18:10:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.400 18:10:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.401 18:10:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.401 18:10:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.401 18:10:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:02.401 00:15:02.401 real 0m20.939s 00:15:02.401 user 1m22.109s 00:15:02.401 sys 0m6.088s 00:15:02.401 18:10:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.401 18:10:00 -- common/autotest_common.sh@10 -- # set +x 00:15:02.401 ************************************ 00:15:02.401 END TEST nvmf_multipath 00:15:02.401 ************************************ 00:15:02.401 18:10:00 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:02.401 18:10:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:02.401 18:10:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:02.401 18:10:00 -- common/autotest_common.sh@10 -- # set +x 00:15:02.660 ************************************ 00:15:02.660 START TEST nvmf_zcopy 00:15:02.660 ************************************ 00:15:02.660 18:10:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:02.660 * Looking for test storage... 00:15:02.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:02.660 18:10:00 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.660 18:10:00 -- nvmf/common.sh@7 -- # uname -s 00:15:02.660 18:10:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.660 18:10:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.660 18:10:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.660 18:10:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.660 18:10:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.660 18:10:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.660 18:10:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.660 18:10:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.660 18:10:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.660 18:10:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.660 18:10:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:02.660 18:10:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:02.660 18:10:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.660 18:10:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.660 18:10:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.660 18:10:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.660 18:10:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.660 18:10:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.660 18:10:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.660 18:10:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.660 18:10:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.660 
18:10:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.660 18:10:00 -- paths/export.sh@5 -- # export PATH 00:15:02.660 18:10:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.660 18:10:00 -- nvmf/common.sh@46 -- # : 0 00:15:02.660 18:10:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.660 18:10:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.660 18:10:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.660 18:10:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.660 18:10:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.660 18:10:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.660 18:10:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.660 18:10:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.660 18:10:00 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:02.660 18:10:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:02.660 18:10:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.660 18:10:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.660 18:10:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.660 18:10:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.660 18:10:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.660 18:10:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.660 18:10:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.660 18:10:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:02.660 18:10:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:02.660 18:10:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:02.660 18:10:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:02.660 18:10:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:02.660 18:10:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:02.660 18:10:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.660 18:10:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.660 18:10:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:02.660 18:10:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:02.660 18:10:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.660 18:10:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.660 18:10:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.660 18:10:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.660 18:10:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.660 18:10:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.660 18:10:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.660 18:10:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.660 18:10:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:02.660 18:10:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:02.660 Cannot find device "nvmf_tgt_br" 00:15:02.660 18:10:00 -- nvmf/common.sh@154 -- # true 00:15:02.660 18:10:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.660 Cannot find device "nvmf_tgt_br2" 00:15:02.660 18:10:00 -- nvmf/common.sh@155 -- # true 00:15:02.660 18:10:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:02.660 18:10:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:02.660 Cannot find device "nvmf_tgt_br" 00:15:02.660 18:10:00 -- nvmf/common.sh@157 -- # true 00:15:02.660 18:10:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:02.660 Cannot find device "nvmf_tgt_br2" 00:15:02.660 18:10:00 -- nvmf/common.sh@158 -- # true 00:15:02.660 18:10:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:02.660 18:10:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:02.660 18:10:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.919 18:10:00 -- nvmf/common.sh@161 -- # true 00:15:02.919 18:10:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.920 18:10:00 -- nvmf/common.sh@162 -- # true 00:15:02.920 18:10:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:02.920 18:10:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:02.920 18:10:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:02.920 18:10:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:02.920 18:10:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:02.920 18:10:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:02.920 18:10:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:02.920 18:10:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:02.920 18:10:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:02.920 18:10:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:02.920 18:10:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:02.920 18:10:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:02.920 18:10:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:02.920 18:10:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.920 18:10:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:02.920 18:10:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:02.920 18:10:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:02.920 
18:10:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:02.920 18:10:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:02.920 18:10:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:02.920 18:10:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:02.920 18:10:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:02.920 18:10:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:02.920 18:10:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:02.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:15:02.920 00:15:02.920 --- 10.0.0.2 ping statistics --- 00:15:02.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.920 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:02.920 18:10:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:02.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:02.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:02.920 00:15:02.920 --- 10.0.0.3 ping statistics --- 00:15:02.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.920 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:02.920 18:10:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:02.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:02.920 00:15:02.920 --- 10.0.0.1 ping statistics --- 00:15:02.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.920 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:02.920 18:10:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.920 18:10:00 -- nvmf/common.sh@421 -- # return 0 00:15:02.920 18:10:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.920 18:10:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.920 18:10:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.920 18:10:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.920 18:10:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.920 18:10:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.920 18:10:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.920 18:10:00 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:02.920 18:10:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.920 18:10:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:02.920 18:10:00 -- common/autotest_common.sh@10 -- # set +x 00:15:02.920 18:10:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:02.920 18:10:00 -- nvmf/common.sh@469 -- # nvmfpid=73861 00:15:02.920 18:10:00 -- nvmf/common.sh@470 -- # waitforlisten 73861 00:15:02.920 18:10:00 -- common/autotest_common.sh@819 -- # '[' -z 73861 ']' 00:15:02.920 18:10:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.920 18:10:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:02.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.920 18:10:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
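For orientation, the nvmf_veth_init trace above boils down to a simple topology: the SPDK target runs inside the nvmf_tgt_ns_spdk network namespace and is reached from the initiator over veth pairs joined on the nvmf_br bridge. A condensed sketch of the same commands follows; names and addresses are copied from the trace, the second target interface (nvmf_tgt_if2, 10.0.0.3) and the iptables rules are set up the same way and omitted for brevity, and this summary is not part of the captured output.

# Condensed from the nvmf_veth_init trace above (sketch, not the script itself).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end of the link
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end of the link
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up && ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.2    # initiator to target reachability, as verified in the ping output above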
00:15:02.920 18:10:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:02.920 18:10:00 -- common/autotest_common.sh@10 -- # set +x 00:15:03.178 [2024-04-25 18:10:00.869155] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:03.178 [2024-04-25 18:10:00.869246] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.178 [2024-04-25 18:10:01.002429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.436 [2024-04-25 18:10:01.121535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:03.436 [2024-04-25 18:10:01.121679] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.436 [2024-04-25 18:10:01.121693] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.436 [2024-04-25 18:10:01.121702] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.436 [2024-04-25 18:10:01.121735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.002 18:10:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:04.002 18:10:01 -- common/autotest_common.sh@852 -- # return 0 00:15:04.002 18:10:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:04.002 18:10:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:04.002 18:10:01 -- common/autotest_common.sh@10 -- # set +x 00:15:04.260 18:10:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.260 18:10:01 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:04.260 18:10:01 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:04.260 18:10:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.260 18:10:01 -- common/autotest_common.sh@10 -- # set +x 00:15:04.260 [2024-04-25 18:10:01.985215] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.260 18:10:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.260 18:10:01 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:04.260 18:10:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.260 18:10:01 -- common/autotest_common.sh@10 -- # set +x 00:15:04.260 18:10:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.260 18:10:01 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.260 18:10:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.260 18:10:01 -- common/autotest_common.sh@10 -- # set +x 00:15:04.260 [2024-04-25 18:10:02.001346] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.260 18:10:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.260 18:10:02 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:04.260 18:10:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.260 18:10:02 -- common/autotest_common.sh@10 -- # set +x 00:15:04.260 18:10:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.260 18:10:02 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
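A side note on the target bring-up traced above: leaving aside the rpc_cmd and xtrace plumbing, the zcopy target is configured with a handful of RPCs. The sketch below uses scripts/rpc.py directly as a standalone equivalent of the rpc_cmd helper; all arguments are copied from the trace, and the only zcopy-specific piece is the --zcopy flag on the transport.

# Standalone equivalent of the rpc_cmd calls traced above (sketch, not the script itself).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                          # 32 MB malloc bdev, 4096-byte blocks
# malloc0 is then attached as namespace 1 of cnode1 via nvmf_subsystem_add_ns, as traced below.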
00:15:04.260 18:10:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.260 18:10:02 -- common/autotest_common.sh@10 -- # set +x 00:15:04.260 malloc0 00:15:04.260 18:10:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.260 18:10:02 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:04.260 18:10:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.260 18:10:02 -- common/autotest_common.sh@10 -- # set +x 00:15:04.260 18:10:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.260 18:10:02 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:04.260 18:10:02 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:04.260 18:10:02 -- nvmf/common.sh@520 -- # config=() 00:15:04.260 18:10:02 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.260 18:10:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.260 18:10:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.260 { 00:15:04.260 "params": { 00:15:04.260 "name": "Nvme$subsystem", 00:15:04.260 "trtype": "$TEST_TRANSPORT", 00:15:04.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.260 "adrfam": "ipv4", 00:15:04.260 "trsvcid": "$NVMF_PORT", 00:15:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.260 "hdgst": ${hdgst:-false}, 00:15:04.260 "ddgst": ${ddgst:-false} 00:15:04.260 }, 00:15:04.260 "method": "bdev_nvme_attach_controller" 00:15:04.260 } 00:15:04.260 EOF 00:15:04.260 )") 00:15:04.260 18:10:02 -- nvmf/common.sh@542 -- # cat 00:15:04.260 18:10:02 -- nvmf/common.sh@544 -- # jq . 00:15:04.260 18:10:02 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.260 18:10:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.260 "params": { 00:15:04.260 "name": "Nvme1", 00:15:04.260 "trtype": "tcp", 00:15:04.260 "traddr": "10.0.0.2", 00:15:04.260 "adrfam": "ipv4", 00:15:04.260 "trsvcid": "4420", 00:15:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.260 "hdgst": false, 00:15:04.260 "ddgst": false 00:15:04.260 }, 00:15:04.260 "method": "bdev_nvme_attach_controller" 00:15:04.260 }' 00:15:04.260 [2024-04-25 18:10:02.095662] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:04.260 [2024-04-25 18:10:02.095765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73918 ] 00:15:04.517 [2024-04-25 18:10:02.237743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.517 [2024-04-25 18:10:02.361696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.774 Running I/O for 10 seconds... 
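One detail in the bdevperf invocation above that is easy to miss: the '--json /dev/fd/62' argument is produced by bash process substitution in zcopy.sh, so bdevperf reads the controller config emitted by gen_nvmf_target_json (the resolved JSON is printed in the trace) without a temporary file. A minimal illustration of the pattern follows; my_config is a hypothetical stand-in for gen_nvmf_target_json, and only the bdevperf path and flags are taken from the trace.

# Process substitution: <(cmd) exposes cmd's stdout as a /dev/fd/N path,
# which is why the traced command line shows --json /dev/fd/62.
my_config() { printf '{ "subsystems": [] }\n'; }    # hypothetical stand-in for gen_nvmf_target_json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(my_config) -t 10 -q 128 -w verify -o 8192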
00:15:14.763 00:15:14.763 Latency(us) 00:15:14.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.763 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:14.763 Verification LBA range: start 0x0 length 0x1000 00:15:14.763 Nvme1n1 : 10.01 9238.69 72.18 0.00 0.00 13819.39 1355.40 22282.24 00:15:14.763 =================================================================================================================== 00:15:14.763 Total : 9238.69 72.18 0.00 0.00 13819.39 1355.40 22282.24 00:15:15.022 18:10:12 -- target/zcopy.sh@39 -- # perfpid=74029 00:15:15.022 18:10:12 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:15.022 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:15.022 18:10:12 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:15.022 18:10:12 -- nvmf/common.sh@520 -- # config=() 00:15:15.022 18:10:12 -- nvmf/common.sh@520 -- # local subsystem config 00:15:15.022 18:10:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:15.022 18:10:12 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:15.022 18:10:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:15.022 { 00:15:15.022 "params": { 00:15:15.022 "name": "Nvme$subsystem", 00:15:15.022 "trtype": "$TEST_TRANSPORT", 00:15:15.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.022 "adrfam": "ipv4", 00:15:15.022 "trsvcid": "$NVMF_PORT", 00:15:15.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.022 "hdgst": ${hdgst:-false}, 00:15:15.022 "ddgst": ${ddgst:-false} 00:15:15.022 }, 00:15:15.022 "method": "bdev_nvme_attach_controller" 00:15:15.022 } 00:15:15.022 EOF 00:15:15.022 )") 00:15:15.022 18:10:12 -- nvmf/common.sh@542 -- # cat 00:15:15.022 [2024-04-25 18:10:12.816328] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.816371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 18:10:12 -- nvmf/common.sh@544 -- # jq . 
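As a quick sanity check on the first bdevperf result table above (a side note, not captured output): the MiB/s column follows directly from the IOPS column and the 8192-byte I/O size used for the verify run.

# 9238.69 IOPS x 8192 bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes).
awk 'BEGIN { printf "%.2f MiB/s\n", 9238.69 * 8192 / 1048576 }'    # prints 72.18 MiB/s, matching the table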
00:15:15.022 18:10:12 -- nvmf/common.sh@545 -- # IFS=, 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 18:10:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:15.022 "params": { 00:15:15.022 "name": "Nvme1", 00:15:15.022 "trtype": "tcp", 00:15:15.022 "traddr": "10.0.0.2", 00:15:15.022 "adrfam": "ipv4", 00:15:15.022 "trsvcid": "4420", 00:15:15.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.022 "hdgst": false, 00:15:15.022 "ddgst": false 00:15:15.022 }, 00:15:15.022 "method": "bdev_nvme_attach_controller" 00:15:15.022 }' 00:15:15.022 [2024-04-25 18:10:12.828289] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.828319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.840280] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.840324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.852287] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.852312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.864288] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.864314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 [2024-04-25 18:10:12.867512] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:15.022 [2024-04-25 18:10:12.867611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74029 ] 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.876301] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.876343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.888324] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.888354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.900313] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.900335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.912331] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.912356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.924346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.924370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.936341] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.936368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.022 [2024-04-25 18:10:12.948346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.022 [2024-04-25 18:10:12.948373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.022 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.293 [2024-04-25 18:10:12.960352] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.293 [2024-04-25 18:10:12.960378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.293 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.293 [2024-04-25 18:10:12.972330] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.293 [2024-04-25 18:10:12.972357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.293 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.293 [2024-04-25 18:10:12.984354] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.293 [2024-04-25 18:10:12.984378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.293 2024/04/25 18:10:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.293 [2024-04-25 18:10:12.996357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.294 [2024-04-25 18:10:12.996379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.294 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.294 [2024-04-25 18:10:13.008366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.294 [2024-04-25 18:10:13.008395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.294 [2024-04-25 18:10:13.009191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.294 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.294 [2024-04-25 18:10:13.020383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.294 [2024-04-25 18:10:13.020416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.294 2024/04/25 18:10:13 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.294 [2024-04-25 18:10:13.032358] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.294 [2024-04-25 18:10:13.032383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.294 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.294 [2024-04-25 18:10:13.044368] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.294 [2024-04-25 18:10:13.044393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.294 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.294 [2024-04-25 18:10:13.056374] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.294 [2024-04-25 18:10:13.056396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.294 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.294 [2024-04-25 18:10:13.068383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.294 [2024-04-25 18:10:13.068405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.295 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.295 [2024-04-25 18:10:13.080401] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.295 [2024-04-25 18:10:13.080439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.295 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.295 [2024-04-25 18:10:13.092402] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.295 [2024-04-25 18:10:13.092426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.295 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.295 [2024-04-25 18:10:13.104405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.295 [2024-04-25 18:10:13.104441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.295 2024/04/25 18:10:13 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.295 [2024-04-25 18:10:13.116406] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.295 [2024-04-25 18:10:13.116455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.295 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.295 [2024-04-25 18:10:13.126081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.295 [2024-04-25 18:10:13.128425] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.295 [2024-04-25 18:10:13.128449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.295 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.295 [2024-04-25 18:10:13.140442] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.295 [2024-04-25 18:10:13.140474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.295 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.296 [2024-04-25 18:10:13.152451] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.296 [2024-04-25 18:10:13.152488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.296 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.296 [2024-04-25 18:10:13.164453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.296 [2024-04-25 18:10:13.164491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.296 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.296 [2024-04-25 18:10:13.176456] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.296 [2024-04-25 18:10:13.176488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.296 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.296 [2024-04-25 18:10:13.188448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.296 [2024-04-25 
18:10:13.188477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.296 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.296 [2024-04-25 18:10:13.200461] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.296 [2024-04-25 18:10:13.200493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.296 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.296 [2024-04-25 18:10:13.212456] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.296 [2024-04-25 18:10:13.212490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.297 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.224441] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.224466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.236529] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.236570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.252508] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.252538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.264576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.264602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.276576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:15.558 [2024-04-25 18:10:13.276606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.288586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.288613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 Running I/O for 5 seconds... 00:15:15.558 [2024-04-25 18:10:13.300581] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.300607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.318065] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.318100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.334152] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.334186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.343132] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.343179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.358753] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.558 [2024-04-25 18:10:13.358783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:15.558 [2024-04-25 18:10:13.375474] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:15.558 [2024-04-25 18:10:13.375520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:15.558 [2024-04-25 18:10:13.391163] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:15.558 [2024-04-25 18:10:13.391209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:15.558 2024/04/25 18:10:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure (subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", nvmf_rpc.c:1513:nvmf_rpc_ns_paused: "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters for method nvmf_subsystem_add_ns with params map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]) repeats for every further add attempt, entry timestamps 18:10:13.407 through 18:10:15.572, elapsed log time 00:15:15.558 through 00:15:17.939 ...]
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.588847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.606040] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.606078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.621688] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.621727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.638764] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.638803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.653884] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.653920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.663939] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.663976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.678362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.678400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 
18:10:15.695068] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.695108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.710524] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.710560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.720105] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.720143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.734738] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.734778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.750872] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.750908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.767108] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.767145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.785361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.785408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:17.939 [2024-04-25 18:10:15.800847] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.939 [2024-04-25 18:10:15.800888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.939 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.939 [2024-04-25 18:10:15.816972] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.940 [2024-04-25 18:10:15.817011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.940 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:17.940 [2024-04-25 18:10:15.836632] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.940 [2024-04-25 18:10:15.836684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.851280] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.851350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.868368] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.868414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.884680] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.884727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.901003] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.901055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.918336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.918384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.933360] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.933413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.949575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.949624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.966505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.966550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.981093] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.981163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:15.996595] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:15.996656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:16.006503] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:16.006541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:16.021499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:16.021548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:16.030816] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:16.030858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.219 [2024-04-25 18:10:16.046075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.219 [2024-04-25 18:10:16.046120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.219 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.220 [2024-04-25 18:10:16.057547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.220 [2024-04-25 18:10:16.057616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.220 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.220 [2024-04-25 18:10:16.073045] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.220 [2024-04-25 18:10:16.073084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.220 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.220 [2024-04-25 18:10:16.090766] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.220 [2024-04-25 18:10:16.090805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.220 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.220 [2024-04-25 18:10:16.106700] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.220 [2024-04-25 18:10:16.106739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.220 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.220 [2024-04-25 18:10:16.123933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.220 [2024-04-25 18:10:16.123973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.220 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.220 [2024-04-25 18:10:16.138899] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.220 [2024-04-25 18:10:16.138938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.220 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.156750] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.156812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.170891] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.170936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.187165] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.187209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.203916] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.203960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.220606] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.220685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.236003] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.236054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.247155] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.247201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.264152] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.264184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.280032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.280064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.296676] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.296709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.312896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.312954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.329632] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.329666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.345164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.345199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.362485] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.362533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.376837] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.376883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.393135] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.479 [2024-04-25 18:10:16.393183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.479 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.479 [2024-04-25 18:10:16.408860] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.480 [2024-04-25 18:10:16.408891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.427145] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.427191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.442050] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.442095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.458914] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.458944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.474949] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.474995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.492358] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.492388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.508790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.508820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.524826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.524873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.542427] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.542458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.739 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.739 [2024-04-25 18:10:16.557221] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.739 [2024-04-25 18:10:16.557254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.740 [2024-04-25 18:10:16.573749] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.740 [2024-04-25 18:10:16.573795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.740 [2024-04-25 18:10:16.590428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.740 [2024-04-25 18:10:16.590457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.740 [2024-04-25 18:10:16.606801] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.740 [2024-04-25 18:10:16.606831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.740 [2024-04-25 18:10:16.623977] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.740 [2024-04-25 18:10:16.624022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.740 [2024-04-25 18:10:16.639047] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.740 [2024-04-25 18:10:16.639079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.740 [2024-04-25 18:10:16.650750] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.740 [2024-04-25 18:10:16.650797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:18.740 [2024-04-25 18:10:16.667736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.740 [2024-04-25 18:10:16.667773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.740 2024/04/25 18:10:16 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.682380] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.682414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.699156] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.699206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.714942] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.714992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.731962] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.732001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.748127] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.748177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.765457] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.765494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.780492] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.780529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 
18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.790176] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.790208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.804952] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.805002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.820623] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.820673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.838613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.838668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.853950] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.854005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.871337] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.871396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.888273] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.888338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.902581] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.902638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.000 [2024-04-25 18:10:16.919260] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.000 [2024-04-25 18:10:16.919323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.000 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:16.933845] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:16.933884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:16.950757] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:16.950809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:16.966886] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:16.966937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:16.985178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:16.985211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.000175] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.000222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.017850] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.017881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.034378] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.034409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.051158] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.051205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.067851] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.067883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.084219] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.084266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.100903] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.100951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.118149] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.118180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.132931] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.132963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.150237] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.150295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.164730] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.164776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.260 [2024-04-25 18:10:17.181755] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.260 [2024-04-25 18:10:17.181817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.260 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.519 [2024-04-25 18:10:17.196672] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.196705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.212805] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.212841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.229077] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.229137] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.245349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.245385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.263681] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.263735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.278018] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.278070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.294712] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.294778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.310923] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.310960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.328228] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.328280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.343168] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 
18:10:17.343216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.358103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.358153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.375626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.375675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.390338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.390388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.405639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.405697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.415527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.415575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.430328] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.520 [2024-04-25 18:10:17.430383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.520 [2024-04-25 18:10:17.446570] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
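The block of failures above (and continuing below) is one repeating pattern: every nvmf_subsystem_add_ns call asks for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is still held by malloc0, so subsystem.c rejects it with "Requested NSID 1 already in use" and the RPC layer surfaces JSON-RPC error -32602 (Invalid parameters) to the Go client. A minimal hand-run sketch of the same collision, using SPDK's rpc.py against the default /var/tmp/spdk.sock socket, is shown below; the exact invocation is an illustration inferred from the params in the log, not a command captured in this run.

  # First add claims NSID 1 for malloc0 and succeeds.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Any repeat with the same nsid is refused, matching the log:
  #   "Requested NSID 1 already in use" -> Code=-32602 Msg=Invalid parameters
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1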
00:15:19.520 [2024-04-25 18:10:17.446621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.520 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.779 [2024-04-25 18:10:17.463577] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.779 [2024-04-25 18:10:17.463630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.779 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.779 [2024-04-25 18:10:17.480796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.779 [2024-04-25 18:10:17.480854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.779 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.779 [2024-04-25 18:10:17.495415] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.779 [2024-04-25 18:10:17.495466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.779 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.779 [2024-04-25 18:10:17.511403] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.779 [2024-04-25 18:10:17.511452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.779 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.528267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.528343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.543582] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.543631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.555326] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:19.780 [2024-04-25 18:10:17.555363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.572272] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.572323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.586597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.586647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.603801] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.603848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.618936] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.618982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.635317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.635348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.651535] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.651575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.669343] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.669377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.684077] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.684125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:19.780 [2024-04-25 18:10:17.695910] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.780 [2024-04-25 18:10:17.695959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.780 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.713358] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.713389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.727716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.727763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.744447] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.744477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.761683] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.761715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.776644] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.776674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.788281] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.788340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.805356] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.805389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.820134] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.820181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.832208] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.832255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.849336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.849367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.864187] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.864234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 
18:10:17.881227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.881258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.898274] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.898333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.914344] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.914376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.931072] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.931103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.947307] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.947338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.040 [2024-04-25 18:10:17.965081] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.040 [2024-04-25 18:10:17.965135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.040 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:17.980227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:17.980273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
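To confirm from the host side why every one of these adds comes back Code=-32602, listing the subsystem shows NSID 1 already allocated to malloc0. The query below is a hedged illustration using standard SPDK tooling (rpc.py and the nvmf_get_subsystems RPC); it is not part of the captured run.

  # Dump the configured subsystems; under nqn.2016-06.io.spdk:cnode1 expect an
  # existing namespace entry with nsid 1 and bdev_name malloc0, which is what
  # blocks the repeated nvmf_subsystem_add_ns calls above.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems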
00:15:20.300 [2024-04-25 18:10:17.989879] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:17.989909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:18.004477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:18.004543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:18.019964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:18.019995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:18.038050] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:18.038080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:18.052896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:18.052926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:18.064456] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:18.064486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:18.081515] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:18.081546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.300 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:20.300 [2024-04-25 18:10:18.097069] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.300 [2024-04-25 18:10:18.097141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.115729] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.115759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.130138] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.130183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.142088] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.142117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.159762] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.159809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.174012] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.174057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.189032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.189078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.200760] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.200790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.217897] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.217945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.301 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.301 [2024-04-25 18:10:18.232724] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.301 [2024-04-25 18:10:18.232754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.561 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.561 [2024-04-25 18:10:18.242061] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.561 [2024-04-25 18:10:18.242091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.561 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.561 [2024-04-25 18:10:18.256996] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.561 [2024-04-25 18:10:18.257028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.561 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.561 [2024-04-25 18:10:18.273941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.561 [2024-04-25 18:10:18.273972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.561 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.561 [2024-04-25 18:10:18.290595] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.561 [2024-04-25 18:10:18.290627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.561 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.561 [2024-04-25 18:10:18.305514] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.561 [2024-04-25 18:10:18.305546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.561 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.561 00:15:20.561 Latency(us) 00:15:20.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.561 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:20.561 Nvme1n1 : 5.01 12073.53 94.32 0.00 0.00 10587.99 4408.79 22520.55 00:15:20.561 =================================================================================================================== 00:15:20.561 Total : 12073.53 94.32 0.00 0.00 10587.99 4408.79 22520.55 00:15:20.561 [2024-04-25 18:10:18.314547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.314574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.326543] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.326572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.338581] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.338613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.350574] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.350611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.362575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.362610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.374592] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.374627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.386606] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.386666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.398604] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.398638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.410615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.410649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.422598] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.422632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.434608] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.434643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.446593] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.446628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.458583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.458609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.470587] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.470611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.562 [2024-04-25 18:10:18.482617] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.562 [2024-04-25 18:10:18.482651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.562 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 [2024-04-25 18:10:18.494612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.821 [2024-04-25 18:10:18.494645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.821 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 [2024-04-25 18:10:18.506599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.821 [2024-04-25 18:10:18.506624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.821 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 [2024-04-25 18:10:18.518597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.821 [2024-04-25 18:10:18.518621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.821 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 [2024-04-25 18:10:18.530626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.821 [2024-04-25 18:10:18.530659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.821 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 [2024-04-25 18:10:18.542631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.821 [2024-04-25 18:10:18.542677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.821 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 [2024-04-25 18:10:18.554613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.821 [2024-04-25 18:10:18.554636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.821 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 [2024-04-25 18:10:18.566610] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.821 [2024-04-25 18:10:18.566646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.821 2024/04/25 18:10:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:20.821 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74029) - No such process 00:15:20.821 18:10:18 -- target/zcopy.sh@49 -- # wait 74029 00:15:20.821 18:10:18 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.821 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.821 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:15:20.821 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.821 18:10:18 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:20.821 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.821 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:15:20.821 delay0 00:15:20.821 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.821 18:10:18 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:20.821 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.821 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:15:20.821 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.821 18:10:18 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:20.821 [2024-04-25 18:10:18.752291] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:27.383 Initializing NVMe Controllers 00:15:27.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:27.383 Initialization 
complete. Launching workers. 00:15:27.383 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 96 00:15:27.383 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 383, failed to submit 33 00:15:27.383 success 213, unsuccess 170, failed 0 00:15:27.383 18:10:24 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:27.383 18:10:24 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:27.383 18:10:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:27.383 18:10:24 -- nvmf/common.sh@116 -- # sync 00:15:27.383 18:10:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:27.383 18:10:24 -- nvmf/common.sh@119 -- # set +e 00:15:27.383 18:10:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:27.383 18:10:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:27.383 rmmod nvme_tcp 00:15:27.383 rmmod nvme_fabrics 00:15:27.383 rmmod nvme_keyring 00:15:27.383 18:10:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:27.384 18:10:24 -- nvmf/common.sh@123 -- # set -e 00:15:27.384 18:10:24 -- nvmf/common.sh@124 -- # return 0 00:15:27.384 18:10:24 -- nvmf/common.sh@477 -- # '[' -n 73861 ']' 00:15:27.384 18:10:24 -- nvmf/common.sh@478 -- # killprocess 73861 00:15:27.384 18:10:24 -- common/autotest_common.sh@926 -- # '[' -z 73861 ']' 00:15:27.384 18:10:24 -- common/autotest_common.sh@930 -- # kill -0 73861 00:15:27.384 18:10:24 -- common/autotest_common.sh@931 -- # uname 00:15:27.384 18:10:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.384 18:10:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73861 00:15:27.384 18:10:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:27.384 killing process with pid 73861 00:15:27.384 18:10:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:27.384 18:10:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73861' 00:15:27.384 18:10:24 -- common/autotest_common.sh@945 -- # kill 73861 00:15:27.384 18:10:24 -- common/autotest_common.sh@950 -- # wait 73861 00:15:27.384 18:10:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:27.384 18:10:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:27.384 18:10:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:27.384 18:10:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.384 18:10:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:27.384 18:10:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.384 18:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.384 18:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.384 18:10:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:27.384 00:15:27.384 real 0m24.873s 00:15:27.384 user 0m40.624s 00:15:27.384 sys 0m6.375s 00:15:27.384 18:10:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.384 ************************************ 00:15:27.384 END TEST nvmf_zcopy 00:15:27.384 18:10:25 -- common/autotest_common.sh@10 -- # set +x 00:15:27.384 ************************************ 00:15:27.384 18:10:25 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:27.384 18:10:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:27.384 18:10:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:27.384 18:10:25 -- common/autotest_common.sh@10 -- # set +x 00:15:27.384 ************************************ 
00:15:27.384 START TEST nvmf_nmic 00:15:27.384 ************************************ 00:15:27.384 18:10:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:27.641 * Looking for test storage... 00:15:27.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:27.641 18:10:25 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.641 18:10:25 -- nvmf/common.sh@7 -- # uname -s 00:15:27.641 18:10:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.641 18:10:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.641 18:10:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.641 18:10:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.641 18:10:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.641 18:10:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.641 18:10:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.641 18:10:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.641 18:10:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.641 18:10:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.641 18:10:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:27.641 18:10:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:27.641 18:10:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.641 18:10:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.641 18:10:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.641 18:10:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.641 18:10:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.641 18:10:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.641 18:10:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.641 18:10:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.641 18:10:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.641 18:10:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.641 18:10:25 -- paths/export.sh@5 -- # export PATH 00:15:27.641 18:10:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.641 18:10:25 -- nvmf/common.sh@46 -- # : 0 00:15:27.641 18:10:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:27.641 18:10:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:27.641 18:10:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:27.641 18:10:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.641 18:10:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.641 18:10:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:27.641 18:10:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:27.641 18:10:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:27.641 18:10:25 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.641 18:10:25 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.641 18:10:25 -- target/nmic.sh@14 -- # nvmftestinit 00:15:27.641 18:10:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:27.641 18:10:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.641 18:10:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:27.641 18:10:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:27.641 18:10:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:27.641 18:10:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.641 18:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.641 18:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.641 18:10:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:27.641 18:10:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:27.641 18:10:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:27.641 18:10:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:27.641 18:10:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:27.641 18:10:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:27.641 18:10:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.641 18:10:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.641 18:10:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:27.641 18:10:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:27.641 18:10:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.641 18:10:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.641 18:10:25 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.641 18:10:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.641 18:10:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.641 18:10:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.641 18:10:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.641 18:10:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.641 18:10:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:27.641 18:10:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:27.641 Cannot find device "nvmf_tgt_br" 00:15:27.641 18:10:25 -- nvmf/common.sh@154 -- # true 00:15:27.641 18:10:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.641 Cannot find device "nvmf_tgt_br2" 00:15:27.641 18:10:25 -- nvmf/common.sh@155 -- # true 00:15:27.641 18:10:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:27.641 18:10:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:27.641 Cannot find device "nvmf_tgt_br" 00:15:27.641 18:10:25 -- nvmf/common.sh@157 -- # true 00:15:27.641 18:10:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:27.641 Cannot find device "nvmf_tgt_br2" 00:15:27.641 18:10:25 -- nvmf/common.sh@158 -- # true 00:15:27.641 18:10:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:27.641 18:10:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:27.641 18:10:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.641 18:10:25 -- nvmf/common.sh@161 -- # true 00:15:27.641 18:10:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.641 18:10:25 -- nvmf/common.sh@162 -- # true 00:15:27.641 18:10:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.641 18:10:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.641 18:10:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.641 18:10:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.641 18:10:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.641 18:10:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.900 18:10:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.900 18:10:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:27.900 18:10:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:27.900 18:10:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:27.900 18:10:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:27.900 18:10:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:27.900 18:10:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:27.900 18:10:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.900 18:10:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.900 18:10:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:27.900 18:10:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:27.900 18:10:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:27.900 18:10:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.900 18:10:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.900 18:10:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.900 18:10:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.900 18:10:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.900 18:10:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:27.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:27.900 00:15:27.900 --- 10.0.0.2 ping statistics --- 00:15:27.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.900 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:27.900 18:10:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:27.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:27.900 00:15:27.900 --- 10.0.0.3 ping statistics --- 00:15:27.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.900 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:27.900 18:10:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:27.900 00:15:27.900 --- 10.0.0.1 ping statistics --- 00:15:27.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.900 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:27.900 18:10:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.900 18:10:25 -- nvmf/common.sh@421 -- # return 0 00:15:27.900 18:10:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:27.900 18:10:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.900 18:10:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:27.900 18:10:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:27.900 18:10:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.900 18:10:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:27.900 18:10:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:27.900 18:10:25 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:27.900 18:10:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:27.900 18:10:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:27.900 18:10:25 -- common/autotest_common.sh@10 -- # set +x 00:15:27.900 18:10:25 -- nvmf/common.sh@469 -- # nvmfpid=74352 00:15:27.900 18:10:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:27.900 18:10:25 -- nvmf/common.sh@470 -- # waitforlisten 74352 00:15:27.900 18:10:25 -- common/autotest_common.sh@819 -- # '[' -z 74352 ']' 00:15:27.900 18:10:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.900 18:10:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:27.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
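For reference, the nvmf_veth_init sequence logged above amounts to roughly the following topology setup. This is a condensed sketch using the interface names and addresses that appear in the log, not the literal nvmf/common.sh implementation; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all error handling are omitted.

  # sketch: build the veth/bridge test network, then launch the target inside the netns
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry the addresses, the *_br ends get bridged
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2    # initiator namespace -> target namespace reachability check

  # the target then runs inside the namespace, as logged below
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF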
00:15:27.900 18:10:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.900 18:10:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:27.900 18:10:25 -- common/autotest_common.sh@10 -- # set +x 00:15:27.900 [2024-04-25 18:10:25.787570] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:27.900 [2024-04-25 18:10:25.787683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.158 [2024-04-25 18:10:25.923447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.158 [2024-04-25 18:10:26.043693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.158 [2024-04-25 18:10:26.044084] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.158 [2024-04-25 18:10:26.044199] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.158 [2024-04-25 18:10:26.044301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.158 [2024-04-25 18:10:26.044480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.158 [2024-04-25 18:10:26.045207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.158 [2024-04-25 18:10:26.045377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.158 [2024-04-25 18:10:26.045391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.093 18:10:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.093 18:10:26 -- common/autotest_common.sh@852 -- # return 0 00:15:29.093 18:10:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.093 18:10:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 18:10:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.093 18:10:26 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 [2024-04-25 18:10:26.718446] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 Malloc0 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 
-- common/autotest_common.sh@10 -- # set +x 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 [2024-04-25 18:10:26.791470] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:29.093 test case1: single bdev can't be used in multiple subsystems 00:15:29.093 18:10:26 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@28 -- # nmic_status=0 00:15:29.093 18:10:26 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 [2024-04-25 18:10:26.815332] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:29.093 [2024-04-25 18:10:26.815367] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:29.093 [2024-04-25 18:10:26.815379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.093 2024/04/25 18:10:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.093 request: 00:15:29.093 { 00:15:29.093 "method": "nvmf_subsystem_add_ns", 00:15:29.093 "params": { 00:15:29.093 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:29.093 "namespace": { 00:15:29.093 "bdev_name": "Malloc0" 00:15:29.093 } 00:15:29.093 } 00:15:29.093 } 00:15:29.093 Got JSON-RPC error response 00:15:29.093 GoRPCClient: error on JSON-RPC call 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@29 -- # nmic_status=1 00:15:29.093 18:10:26 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:29.093 18:10:26 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:29.093 Adding namespace failed - expected result. 
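The expected-failure path exercised by test case1 above corresponds roughly to the RPC sequence below. It is a sketch that calls rpc.py directly rather than going through the test's rpc_cmd wrapper; the NQNs, serial numbers and the Malloc0 bdev name are taken from the log. The second nvmf_subsystem_add_ns is supposed to fail with JSON-RPC error -32602, because Malloc0 is already claimed exclusive_write by cnode1 -- exactly the error recorded above.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: Malloc0 was claimed by two subsystems" >&2
      exit 1
  fi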
00:15:29.093 test case2: host connect to nvmf target in multiple paths 00:15:29.093 18:10:26 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:29.093 18:10:26 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:29.093 18:10:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.093 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 [2024-04-25 18:10:26.827409] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:29.093 18:10:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.093 18:10:26 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.093 18:10:26 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:29.352 18:10:27 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.352 18:10:27 -- common/autotest_common.sh@1177 -- # local i=0 00:15:29.352 18:10:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.352 18:10:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:29.352 18:10:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:31.254 18:10:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:31.254 18:10:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:31.254 18:10:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.514 18:10:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:31.514 18:10:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.514 18:10:29 -- common/autotest_common.sh@1187 -- # return 0 00:15:31.514 18:10:29 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:31.514 [global] 00:15:31.514 thread=1 00:15:31.514 invalidate=1 00:15:31.514 rw=write 00:15:31.514 time_based=1 00:15:31.514 runtime=1 00:15:31.514 ioengine=libaio 00:15:31.514 direct=1 00:15:31.514 bs=4096 00:15:31.514 iodepth=1 00:15:31.514 norandommap=0 00:15:31.514 numjobs=1 00:15:31.514 00:15:31.514 verify_dump=1 00:15:31.514 verify_backlog=512 00:15:31.514 verify_state_save=0 00:15:31.514 do_verify=1 00:15:31.514 verify=crc32c-intel 00:15:31.514 [job0] 00:15:31.514 filename=/dev/nvme0n1 00:15:31.514 Could not set queue depth (nvme0n1) 00:15:31.514 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:31.514 fio-3.35 00:15:31.514 Starting 1 thread 00:15:32.889 00:15:32.889 job0: (groupid=0, jobs=1): err= 0: pid=74456: Thu Apr 25 18:10:30 2024 00:15:32.889 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:15:32.889 slat (nsec): min=12195, max=87005, avg=16093.00, stdev=5949.10 00:15:32.889 clat (usec): min=110, max=753, avg=151.38, stdev=24.72 00:15:32.889 lat (usec): min=123, max=769, avg=167.47, stdev=26.05 00:15:32.889 clat percentiles (usec): 00:15:32.889 | 1.00th=[ 120], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 135], 00:15:32.889 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 153], 00:15:32.889 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 178], 
95.00th=[ 190], 00:15:32.889 | 99.00th=[ 221], 99.50th=[ 233], 99.90th=[ 293], 99.95th=[ 545], 00:15:32.889 | 99.99th=[ 758] 00:15:32.889 write: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec); 0 zone resets 00:15:32.889 slat (usec): min=16, max=165, avg=23.65, stdev= 9.25 00:15:32.889 clat (usec): min=77, max=225, avg=110.24, stdev=19.42 00:15:32.889 lat (usec): min=94, max=349, avg=133.89, stdev=22.74 00:15:32.889 clat percentiles (usec): 00:15:32.889 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 95], 00:15:32.889 | 30.00th=[ 98], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 111], 00:15:32.889 | 70.00th=[ 116], 80.00th=[ 124], 90.00th=[ 137], 95.00th=[ 149], 00:15:32.890 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 210], 99.95th=[ 212], 00:15:32.890 | 99.99th=[ 227] 00:15:32.890 bw ( KiB/s): min=13512, max=13512, per=95.57%, avg=13512.00, stdev= 0.00, samples=1 00:15:32.890 iops : min= 3378, max= 3378, avg=3378.00, stdev= 0.00, samples=1 00:15:32.890 lat (usec) : 100=19.18%, 250=80.70%, 500=0.09%, 750=0.02%, 1000=0.02% 00:15:32.890 cpu : usr=2.60%, sys=9.70%, ctx=6610, majf=0, minf=2 00:15:32.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:32.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.890 issued rwts: total=3072,3538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:32.890 00:15:32.890 Run status group 0 (all jobs): 00:15:32.890 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:15:32.890 WRITE: bw=13.8MiB/s (14.5MB/s), 13.8MiB/s-13.8MiB/s (14.5MB/s-14.5MB/s), io=13.8MiB (14.5MB), run=1001-1001msec 00:15:32.890 00:15:32.890 Disk stats (read/write): 00:15:32.890 nvme0n1: ios=2884/3072, merge=0/0, ticks=478/400, in_queue=878, util=91.58% 00:15:32.890 18:10:30 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:32.890 18:10:30 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.890 18:10:30 -- common/autotest_common.sh@1198 -- # local i=0 00:15:32.890 18:10:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:32.890 18:10:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.890 18:10:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:32.890 18:10:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.890 18:10:30 -- common/autotest_common.sh@1210 -- # return 0 00:15:32.890 18:10:30 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:32.890 18:10:30 -- target/nmic.sh@53 -- # nvmftestfini 00:15:32.890 18:10:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:32.890 18:10:30 -- nvmf/common.sh@116 -- # sync 00:15:32.890 18:10:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:32.890 18:10:30 -- nvmf/common.sh@119 -- # set +e 00:15:32.890 18:10:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:32.890 18:10:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:32.890 rmmod nvme_tcp 00:15:32.890 rmmod nvme_fabrics 00:15:32.890 rmmod nvme_keyring 00:15:32.890 18:10:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:32.890 18:10:30 -- nvmf/common.sh@123 -- # set -e 00:15:32.890 18:10:30 -- nvmf/common.sh@124 -- # return 0 00:15:32.890 18:10:30 -- 
nvmf/common.sh@477 -- # '[' -n 74352 ']' 00:15:32.890 18:10:30 -- nvmf/common.sh@478 -- # killprocess 74352 00:15:32.890 18:10:30 -- common/autotest_common.sh@926 -- # '[' -z 74352 ']' 00:15:32.890 18:10:30 -- common/autotest_common.sh@930 -- # kill -0 74352 00:15:32.890 18:10:30 -- common/autotest_common.sh@931 -- # uname 00:15:32.890 18:10:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:32.890 18:10:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74352 00:15:32.890 18:10:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:32.890 18:10:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:32.890 killing process with pid 74352 00:15:32.890 18:10:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74352' 00:15:32.890 18:10:30 -- common/autotest_common.sh@945 -- # kill 74352 00:15:32.890 18:10:30 -- common/autotest_common.sh@950 -- # wait 74352 00:15:33.160 18:10:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:33.160 18:10:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:33.160 18:10:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:33.160 18:10:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.160 18:10:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:33.160 18:10:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.160 18:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.160 18:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.160 18:10:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:33.160 00:15:33.160 real 0m5.722s 00:15:33.160 user 0m19.066s 00:15:33.160 sys 0m1.296s 00:15:33.160 18:10:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.160 18:10:30 -- common/autotest_common.sh@10 -- # set +x 00:15:33.160 ************************************ 00:15:33.160 END TEST nvmf_nmic 00:15:33.160 ************************************ 00:15:33.160 18:10:31 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:33.160 18:10:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:33.161 18:10:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.161 18:10:31 -- common/autotest_common.sh@10 -- # set +x 00:15:33.161 ************************************ 00:15:33.161 START TEST nvmf_fio_target 00:15:33.161 ************************************ 00:15:33.161 18:10:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:33.419 * Looking for test storage... 
00:15:33.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.420 18:10:31 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.420 18:10:31 -- nvmf/common.sh@7 -- # uname -s 00:15:33.420 18:10:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.420 18:10:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.420 18:10:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.420 18:10:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.420 18:10:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.420 18:10:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.420 18:10:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.420 18:10:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.420 18:10:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.420 18:10:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.420 18:10:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:33.420 18:10:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:33.420 18:10:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.420 18:10:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.420 18:10:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.420 18:10:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.420 18:10:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.420 18:10:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.420 18:10:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.420 18:10:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.420 18:10:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.420 18:10:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.420 18:10:31 -- paths/export.sh@5 
-- # export PATH 00:15:33.420 18:10:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.420 18:10:31 -- nvmf/common.sh@46 -- # : 0 00:15:33.420 18:10:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:33.420 18:10:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:33.420 18:10:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:33.420 18:10:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.420 18:10:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.420 18:10:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:33.420 18:10:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:33.420 18:10:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:33.420 18:10:31 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.420 18:10:31 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.420 18:10:31 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.420 18:10:31 -- target/fio.sh@16 -- # nvmftestinit 00:15:33.420 18:10:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:33.420 18:10:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.420 18:10:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:33.420 18:10:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:33.420 18:10:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:33.420 18:10:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.420 18:10:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.420 18:10:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.420 18:10:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:33.420 18:10:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:33.420 18:10:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:33.420 18:10:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:33.420 18:10:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:33.420 18:10:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:33.420 18:10:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.420 18:10:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.420 18:10:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:33.420 18:10:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:33.420 18:10:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.420 18:10:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.420 18:10:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.420 18:10:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.420 18:10:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.420 18:10:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.420 18:10:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.420 18:10:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.420 18:10:31 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:33.420 18:10:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:33.420 Cannot find device "nvmf_tgt_br" 00:15:33.420 18:10:31 -- nvmf/common.sh@154 -- # true 00:15:33.420 18:10:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.420 Cannot find device "nvmf_tgt_br2" 00:15:33.420 18:10:31 -- nvmf/common.sh@155 -- # true 00:15:33.420 18:10:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:33.420 18:10:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:33.420 Cannot find device "nvmf_tgt_br" 00:15:33.420 18:10:31 -- nvmf/common.sh@157 -- # true 00:15:33.420 18:10:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:33.420 Cannot find device "nvmf_tgt_br2" 00:15:33.420 18:10:31 -- nvmf/common.sh@158 -- # true 00:15:33.420 18:10:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:33.420 18:10:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:33.420 18:10:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.420 18:10:31 -- nvmf/common.sh@161 -- # true 00:15:33.420 18:10:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.420 18:10:31 -- nvmf/common.sh@162 -- # true 00:15:33.420 18:10:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.420 18:10:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.420 18:10:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.420 18:10:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.420 18:10:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.420 18:10:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.420 18:10:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:33.420 18:10:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:33.420 18:10:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:33.420 18:10:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:33.420 18:10:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:33.420 18:10:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:33.679 18:10:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:33.679 18:10:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.679 18:10:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.679 18:10:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.679 18:10:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:33.679 18:10:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:33.679 18:10:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.679 18:10:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.679 18:10:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.679 18:10:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.679 18:10:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.679 18:10:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:33.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:15:33.679 00:15:33.679 --- 10.0.0.2 ping statistics --- 00:15:33.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.679 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:15:33.679 18:10:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:33.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:33.679 00:15:33.679 --- 10.0.0.3 ping statistics --- 00:15:33.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.679 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:33.679 18:10:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:33.679 00:15:33.679 --- 10.0.0.1 ping statistics --- 00:15:33.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.679 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:33.679 18:10:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.679 18:10:31 -- nvmf/common.sh@421 -- # return 0 00:15:33.679 18:10:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:33.679 18:10:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.679 18:10:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:33.679 18:10:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:33.679 18:10:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.679 18:10:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:33.679 18:10:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:33.679 18:10:31 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:33.679 18:10:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:33.679 18:10:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:33.679 18:10:31 -- common/autotest_common.sh@10 -- # set +x 00:15:33.679 18:10:31 -- nvmf/common.sh@469 -- # nvmfpid=74634 00:15:33.679 18:10:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.679 18:10:31 -- nvmf/common.sh@470 -- # waitforlisten 74634 00:15:33.679 18:10:31 -- common/autotest_common.sh@819 -- # '[' -z 74634 ']' 00:15:33.679 18:10:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.679 18:10:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:33.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.679 18:10:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.679 18:10:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:33.679 18:10:31 -- common/autotest_common.sh@10 -- # set +x 00:15:33.679 [2024-04-25 18:10:31.547484] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:33.679 [2024-04-25 18:10:31.547578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.937 [2024-04-25 18:10:31.692179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.937 [2024-04-25 18:10:31.810868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:33.937 [2024-04-25 18:10:31.811016] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.937 [2024-04-25 18:10:31.811028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.937 [2024-04-25 18:10:31.811037] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.937 [2024-04-25 18:10:31.811205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.937 [2024-04-25 18:10:31.811330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.937 [2024-04-25 18:10:31.811424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.937 [2024-04-25 18:10:31.811428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.871 18:10:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:34.871 18:10:32 -- common/autotest_common.sh@852 -- # return 0 00:15:34.871 18:10:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:34.871 18:10:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:34.871 18:10:32 -- common/autotest_common.sh@10 -- # set +x 00:15:34.871 18:10:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.871 18:10:32 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.871 [2024-04-25 18:10:32.798408] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.129 18:10:32 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.386 18:10:33 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:35.386 18:10:33 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.643 18:10:33 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:35.643 18:10:33 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.900 18:10:33 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:35.900 18:10:33 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:36.159 18:10:33 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:36.159 18:10:33 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:36.417 18:10:34 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:36.675 18:10:34 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:36.675 18:10:34 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:36.933 18:10:34 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:36.933 18:10:34 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:37.192 18:10:34 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:15:37.192 18:10:34 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:37.451 18:10:35 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.709 18:10:35 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:37.709 18:10:35 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.967 18:10:35 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:37.967 18:10:35 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.967 18:10:35 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.225 [2024-04-25 18:10:36.081330] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.225 18:10:36 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:38.483 18:10:36 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:38.741 18:10:36 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.000 18:10:36 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:39.000 18:10:36 -- common/autotest_common.sh@1177 -- # local i=0 00:15:39.000 18:10:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.000 18:10:36 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:15:39.000 18:10:36 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:15:39.000 18:10:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:40.898 18:10:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:40.898 18:10:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:40.898 18:10:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.898 18:10:38 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:15:40.898 18:10:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.898 18:10:38 -- common/autotest_common.sh@1187 -- # return 0 00:15:40.898 18:10:38 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:40.898 [global] 00:15:40.898 thread=1 00:15:40.898 invalidate=1 00:15:40.898 rw=write 00:15:40.898 time_based=1 00:15:40.898 runtime=1 00:15:40.898 ioengine=libaio 00:15:40.898 direct=1 00:15:40.898 bs=4096 00:15:40.898 iodepth=1 00:15:40.898 norandommap=0 00:15:40.898 numjobs=1 00:15:40.898 00:15:40.898 verify_dump=1 00:15:40.898 verify_backlog=512 00:15:40.898 verify_state_save=0 00:15:40.898 do_verify=1 00:15:40.898 verify=crc32c-intel 00:15:40.898 [job0] 00:15:40.898 filename=/dev/nvme0n1 00:15:40.898 [job1] 00:15:40.898 filename=/dev/nvme0n2 00:15:40.898 [job2] 00:15:40.898 filename=/dev/nvme0n3 00:15:40.898 [job3] 00:15:40.898 filename=/dev/nvme0n4 00:15:40.898 Could not set queue depth (nvme0n1) 00:15:40.898 Could not set queue depth (nvme0n2) 
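The bdev layout that fio.sh assembles above can be summarised as the sketch below (rpc.py calls and names taken from the log): seven 64 MiB malloc bdevs with 512-byte blocks, two exported directly, two combined into a RAID0 and three into a concat bdev, so the subsystem exposes four namespaces and the initiator sees nvme0n1 through nvme0n4.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc bdev_malloc_create 64 512                # -> Malloc0
  $rpc bdev_malloc_create 64 512                # -> Malloc1
  $rpc bdev_malloc_create 64 512                # -> Malloc2
  $rpc bdev_malloc_create 64 512                # -> Malloc3
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512                # -> Malloc4
  $rpc bdev_malloc_create 64 512                # -> Malloc5
  $rpc bdev_malloc_create 64 512                # -> Malloc6
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 \
      --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11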
00:15:40.898 Could not set queue depth (nvme0n3) 00:15:40.898 Could not set queue depth (nvme0n4) 00:15:41.156 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.156 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.156 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.156 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.156 fio-3.35 00:15:41.156 Starting 4 threads 00:15:42.528 00:15:42.528 job0: (groupid=0, jobs=1): err= 0: pid=74934: Thu Apr 25 18:10:40 2024 00:15:42.528 read: IOPS=3043, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:15:42.528 slat (nsec): min=12691, max=47942, avg=16212.14, stdev=3170.19 00:15:42.528 clat (usec): min=126, max=234, avg=155.02, stdev=11.11 00:15:42.528 lat (usec): min=140, max=250, avg=171.23, stdev=11.58 00:15:42.528 clat percentiles (usec): 00:15:42.528 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:15:42.528 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:15:42.529 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 178], 00:15:42.529 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 198], 99.95th=[ 225], 00:15:42.529 | 99.99th=[ 235] 00:15:42.529 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:42.529 slat (usec): min=18, max=107, avg=23.85, stdev= 4.89 00:15:42.529 clat (usec): min=98, max=2550, avg=128.28, stdev=50.54 00:15:42.529 lat (usec): min=120, max=2570, avg=152.12, stdev=50.79 00:15:42.529 clat percentiles (usec): 00:15:42.529 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 119], 00:15:42.529 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:15:42.529 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147], 00:15:42.529 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 265], 99.95th=[ 1270], 00:15:42.529 | 99.99th=[ 2540] 00:15:42.529 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:15:42.529 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:42.529 lat (usec) : 100=0.10%, 250=99.84%, 500=0.02%, 750=0.02% 00:15:42.529 lat (msec) : 2=0.02%, 4=0.02% 00:15:42.529 cpu : usr=2.10%, sys=9.10%, ctx=6121, majf=0, minf=11 00:15:42.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 issued rwts: total=3047,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.529 job1: (groupid=0, jobs=1): err= 0: pid=74935: Thu Apr 25 18:10:40 2024 00:15:42.529 read: IOPS=3063, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1001msec) 00:15:42.529 slat (nsec): min=12041, max=43185, avg=15116.14, stdev=3194.30 00:15:42.529 clat (usec): min=127, max=225, avg=156.09, stdev=11.31 00:15:42.529 lat (usec): min=140, max=239, avg=171.20, stdev=11.86 00:15:42.529 clat percentiles (usec): 00:15:42.529 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 147], 00:15:42.529 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:15:42.529 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 178], 00:15:42.529 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 208], 99.95th=[ 212], 00:15:42.529 | 
99.99th=[ 227] 00:15:42.529 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:42.529 slat (nsec): min=17314, max=99249, avg=22594.38, stdev=5608.61 00:15:42.529 clat (usec): min=98, max=1669, avg=128.77, stdev=39.55 00:15:42.529 lat (usec): min=117, max=1689, avg=151.36, stdev=40.07 00:15:42.529 clat percentiles (usec): 00:15:42.529 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:15:42.529 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 129], 00:15:42.529 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:15:42.529 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 1565], 00:15:42.529 | 99.99th=[ 1663] 00:15:42.529 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:15:42.529 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:42.529 lat (usec) : 100=0.02%, 250=99.95% 00:15:42.529 lat (msec) : 2=0.03% 00:15:42.529 cpu : usr=2.50%, sys=8.30%, ctx=6140, majf=0, minf=5 00:15:42.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 issued rwts: total=3067,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.529 job2: (groupid=0, jobs=1): err= 0: pid=74936: Thu Apr 25 18:10:40 2024 00:15:42.529 read: IOPS=2787, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:15:42.529 slat (nsec): min=14083, max=65053, avg=16697.22, stdev=3619.46 00:15:42.529 clat (usec): min=135, max=534, avg=161.64, stdev=14.63 00:15:42.529 lat (usec): min=150, max=552, avg=178.34, stdev=15.56 00:15:42.529 clat percentiles (usec): 00:15:42.529 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 151], 00:15:42.529 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:15:42.529 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:15:42.529 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 225], 99.95th=[ 285], 00:15:42.529 | 99.99th=[ 537] 00:15:42.529 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:42.529 slat (nsec): min=19809, max=80672, avg=24381.69, stdev=4894.30 00:15:42.529 clat (usec): min=104, max=2113, avg=135.78, stdev=38.94 00:15:42.529 lat (usec): min=127, max=2136, avg=160.17, stdev=39.30 00:15:42.529 clat percentiles (usec): 00:15:42.529 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:15:42.529 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:15:42.529 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:15:42.529 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 367], 99.95th=[ 562], 00:15:42.529 | 99.99th=[ 2114] 00:15:42.529 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:15:42.529 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:42.529 lat (usec) : 250=99.90%, 500=0.05%, 750=0.03% 00:15:42.529 lat (msec) : 4=0.02% 00:15:42.529 cpu : usr=1.70%, sys=9.40%, ctx=5862, majf=0, minf=8 00:15:42.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 issued rwts: total=2790,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.529 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:15:42.529 job3: (groupid=0, jobs=1): err= 0: pid=74937: Thu Apr 25 18:10:40 2024 00:15:42.529 read: IOPS=2799, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec) 00:15:42.529 slat (nsec): min=12263, max=43823, avg=15202.96, stdev=2872.93 00:15:42.529 clat (usec): min=135, max=1120, avg=162.44, stdev=23.21 00:15:42.529 lat (usec): min=149, max=1137, avg=177.64, stdev=23.51 00:15:42.529 clat percentiles (usec): 00:15:42.529 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:15:42.529 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:15:42.529 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:15:42.529 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 392], 99.95th=[ 400], 00:15:42.529 | 99.99th=[ 1123] 00:15:42.529 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:42.529 slat (nsec): min=16489, max=81699, avg=22128.25, stdev=4591.42 00:15:42.529 clat (usec): min=104, max=1629, avg=138.16, stdev=34.47 00:15:42.529 lat (usec): min=123, max=1649, avg=160.28, stdev=34.86 00:15:42.529 clat percentiles (usec): 00:15:42.529 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 128], 00:15:42.529 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:15:42.529 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:15:42.529 | 99.00th=[ 176], 99.50th=[ 194], 99.90th=[ 537], 99.95th=[ 791], 00:15:42.529 | 99.99th=[ 1631] 00:15:42.529 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:15:42.529 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:42.529 lat (usec) : 250=99.69%, 500=0.22%, 750=0.03%, 1000=0.02% 00:15:42.529 lat (msec) : 2=0.03% 00:15:42.529 cpu : usr=2.40%, sys=7.80%, ctx=5874, majf=0, minf=11 00:15:42.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.529 issued rwts: total=2802,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.529 00:15:42.529 Run status group 0 (all jobs): 00:15:42.529 READ: bw=45.7MiB/s (47.9MB/s), 10.9MiB/s-12.0MiB/s (11.4MB/s-12.5MB/s), io=45.7MiB (47.9MB), run=1001-1001msec 00:15:42.529 WRITE: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:15:42.529 00:15:42.529 Disk stats (read/write): 00:15:42.529 nvme0n1: ios=2610/2681, merge=0/0, ticks=437/370, in_queue=807, util=87.58% 00:15:42.529 nvme0n2: ios=2595/2689, merge=0/0, ticks=457/376, in_queue=833, util=89.24% 00:15:42.529 nvme0n3: ios=2444/2560, merge=0/0, ticks=417/370, in_queue=787, util=89.11% 00:15:42.529 nvme0n4: ios=2456/2560, merge=0/0, ticks=412/373, in_queue=785, util=89.57% 00:15:42.529 18:10:40 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:42.529 [global] 00:15:42.529 thread=1 00:15:42.529 invalidate=1 00:15:42.529 rw=randwrite 00:15:42.529 time_based=1 00:15:42.529 runtime=1 00:15:42.529 ioengine=libaio 00:15:42.529 direct=1 00:15:42.529 bs=4096 00:15:42.529 iodepth=1 00:15:42.529 norandommap=0 00:15:42.529 numjobs=1 00:15:42.529 00:15:42.529 verify_dump=1 00:15:42.529 verify_backlog=512 00:15:42.529 verify_state_save=0 00:15:42.529 do_verify=1 00:15:42.529 verify=crc32c-intel 00:15:42.529 
[job0] 00:15:42.529 filename=/dev/nvme0n1 00:15:42.529 [job1] 00:15:42.529 filename=/dev/nvme0n2 00:15:42.529 [job2] 00:15:42.529 filename=/dev/nvme0n3 00:15:42.529 [job3] 00:15:42.529 filename=/dev/nvme0n4 00:15:42.529 Could not set queue depth (nvme0n1) 00:15:42.529 Could not set queue depth (nvme0n2) 00:15:42.529 Could not set queue depth (nvme0n3) 00:15:42.529 Could not set queue depth (nvme0n4) 00:15:42.529 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.529 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.529 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.529 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.529 fio-3.35 00:15:42.529 Starting 4 threads 00:15:43.907 00:15:43.907 job0: (groupid=0, jobs=1): err= 0: pid=74990: Thu Apr 25 18:10:41 2024 00:15:43.907 read: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec) 00:15:43.907 slat (nsec): min=12596, max=53617, avg=18640.42, stdev=5704.27 00:15:43.907 clat (usec): min=125, max=709, avg=286.05, stdev=67.48 00:15:43.907 lat (usec): min=141, max=727, avg=304.69, stdev=69.06 00:15:43.907 clat percentiles (usec): 00:15:43.907 | 1.00th=[ 139], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 229], 00:15:43.907 | 30.00th=[ 241], 40.00th=[ 260], 50.00th=[ 293], 60.00th=[ 310], 00:15:43.907 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 371], 00:15:43.907 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 635], 99.95th=[ 709], 00:15:43.907 | 99.99th=[ 709] 00:15:43.907 write: IOPS=1978, BW=7912KiB/s (8102kB/s)(7912KiB/1000msec); 0 zone resets 00:15:43.907 slat (nsec): min=11449, max=79767, avg=29389.75, stdev=9401.05 00:15:43.907 clat (usec): min=107, max=3399, avg=235.50, stdev=91.89 00:15:43.907 lat (usec): min=129, max=3428, avg=264.89, stdev=93.56 00:15:43.907 clat percentiles (usec): 00:15:43.907 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 194], 00:15:43.907 | 30.00th=[ 215], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:15:43.907 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 285], 00:15:43.907 | 99.00th=[ 355], 99.50th=[ 453], 99.90th=[ 1237], 99.95th=[ 3392], 00:15:43.907 | 99.99th=[ 3392] 00:15:43.907 bw ( KiB/s): min= 8159, max= 8159, per=23.98%, avg=8159.00, stdev= 0.00, samples=1 00:15:43.907 iops : min= 2039, max= 2039, avg=2039.00, stdev= 0.00, samples=1 00:15:43.907 lat (usec) : 250=53.44%, 500=45.70%, 750=0.68%, 1000=0.06% 00:15:43.907 lat (msec) : 2=0.09%, 4=0.03% 00:15:43.907 cpu : usr=2.10%, sys=6.20%, ctx=3523, majf=0, minf=9 00:15:43.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.907 issued rwts: total=1536,1978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.907 job1: (groupid=0, jobs=1): err= 0: pid=74991: Thu Apr 25 18:10:41 2024 00:15:43.907 read: IOPS=1667, BW=6669KiB/s (6829kB/s)(6676KiB/1001msec) 00:15:43.907 slat (nsec): min=10418, max=65382, avg=16472.55, stdev=5431.06 00:15:43.907 clat (usec): min=122, max=4162, avg=291.91, stdev=186.85 00:15:43.907 lat (usec): min=137, max=4178, avg=308.38, stdev=187.20 00:15:43.907 clat percentiles 
(usec): 00:15:43.907 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 200], 20.00th=[ 225], 00:15:43.907 | 30.00th=[ 237], 40.00th=[ 251], 50.00th=[ 281], 60.00th=[ 310], 00:15:43.907 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 388], 00:15:43.907 | 99.00th=[ 553], 99.50th=[ 1106], 99.90th=[ 3949], 99.95th=[ 4178], 00:15:43.907 | 99.99th=[ 4178] 00:15:43.907 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:43.907 slat (usec): min=13, max=228, avg=25.43, stdev= 8.07 00:15:43.907 clat (usec): min=91, max=437, avg=207.90, stdev=66.31 00:15:43.907 lat (usec): min=115, max=463, avg=233.33, stdev=65.69 00:15:43.907 clat percentiles (usec): 00:15:43.907 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 122], 00:15:43.908 | 30.00th=[ 174], 40.00th=[ 194], 50.00th=[ 227], 60.00th=[ 245], 00:15:43.908 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:15:43.908 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 400], 99.95th=[ 404], 00:15:43.908 | 99.99th=[ 437] 00:15:43.908 bw ( KiB/s): min= 8159, max= 8159, per=23.98%, avg=8159.00, stdev= 0.00, samples=1 00:15:43.908 iops : min= 2039, max= 2039, avg=2039.00, stdev= 0.00, samples=1 00:15:43.908 lat (usec) : 100=1.78%, 250=51.41%, 500=45.87%, 750=0.65%, 1000=0.05% 00:15:43.908 lat (msec) : 2=0.13%, 4=0.08%, 10=0.03% 00:15:43.908 cpu : usr=2.30%, sys=5.40%, ctx=3733, majf=0, minf=8 00:15:43.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.908 issued rwts: total=1669,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.908 job2: (groupid=0, jobs=1): err= 0: pid=74992: Thu Apr 25 18:10:41 2024 00:15:43.908 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(9.83MiB/1001msec) 00:15:43.908 slat (nsec): min=8642, max=60271, avg=15561.97, stdev=4494.33 00:15:43.908 clat (usec): min=131, max=623, avg=210.65, stdev=91.33 00:15:43.908 lat (usec): min=145, max=638, avg=226.21, stdev=91.04 00:15:43.908 clat percentiles (usec): 00:15:43.908 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:15:43.908 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 184], 00:15:43.908 | 70.00th=[ 235], 80.00th=[ 253], 90.00th=[ 310], 95.00th=[ 461], 00:15:43.908 | 99.00th=[ 515], 99.50th=[ 523], 99.90th=[ 562], 99.95th=[ 562], 00:15:43.908 | 99.99th=[ 627] 00:15:43.908 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:43.908 slat (nsec): min=10574, max=94716, avg=22955.82, stdev=6715.23 00:15:43.908 clat (usec): min=99, max=2374, avg=141.95, stdev=58.36 00:15:43.908 lat (usec): min=118, max=2394, avg=164.91, stdev=57.62 00:15:43.908 clat percentiles (usec): 00:15:43.908 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 115], 00:15:43.908 | 30.00th=[ 118], 40.00th=[ 122], 50.00th=[ 126], 60.00th=[ 133], 00:15:43.908 | 70.00th=[ 141], 80.00th=[ 161], 90.00th=[ 212], 95.00th=[ 229], 00:15:43.908 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 289], 99.95th=[ 388], 00:15:43.908 | 99.99th=[ 2376] 00:15:43.908 bw ( KiB/s): min=12239, max=12239, per=35.97%, avg=12239.00, stdev= 0.00, samples=1 00:15:43.908 iops : min= 3059, max= 3059, avg=3059.00, stdev= 0.00, samples=1 00:15:43.908 lat (usec) : 100=0.02%, 250=88.58%, 500=10.64%, 750=0.75% 00:15:43.908 lat (msec) : 4=0.02% 00:15:43.908 cpu : usr=1.80%, 
sys=7.30%, ctx=5079, majf=0, minf=7 00:15:43.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.908 issued rwts: total=2517,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.908 job3: (groupid=0, jobs=1): err= 0: pid=74993: Thu Apr 25 18:10:41 2024 00:15:43.908 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:43.908 slat (nsec): min=10082, max=57072, avg=15911.54, stdev=5065.28 00:15:43.908 clat (usec): min=175, max=4315, avg=299.35, stdev=167.03 00:15:43.908 lat (usec): min=192, max=4338, avg=315.26, stdev=167.47 00:15:43.908 clat percentiles (usec): 00:15:43.908 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 241], 00:15:43.908 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 293], 60.00th=[ 314], 00:15:43.908 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 367], 00:15:43.908 | 99.00th=[ 424], 99.50th=[ 545], 99.90th=[ 3884], 99.95th=[ 4293], 00:15:43.908 | 99.99th=[ 4293] 00:15:43.908 write: IOPS=1926, BW=7704KiB/s (7889kB/s)(7712KiB/1001msec); 0 zone resets 00:15:43.908 slat (usec): min=10, max=142, avg=21.87, stdev= 7.68 00:15:43.908 clat (usec): min=117, max=531, avg=242.55, stdev=37.00 00:15:43.908 lat (usec): min=135, max=546, avg=264.43, stdev=39.17 00:15:43.908 clat percentiles (usec): 00:15:43.908 | 1.00th=[ 128], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 212], 00:15:43.908 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:15:43.908 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:15:43.908 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 457], 99.95th=[ 529], 00:15:43.908 | 99.99th=[ 529] 00:15:43.908 bw ( KiB/s): min= 8159, max= 8159, per=23.98%, avg=8159.00, stdev= 0.00, samples=1 00:15:43.908 iops : min= 2039, max= 2039, avg=2039.00, stdev= 0.00, samples=1 00:15:43.908 lat (usec) : 250=43.10%, 500=56.64%, 750=0.09% 00:15:43.908 lat (msec) : 2=0.06%, 4=0.09%, 10=0.03% 00:15:43.908 cpu : usr=1.10%, sys=5.60%, ctx=3470, majf=0, minf=21 00:15:43.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.908 issued rwts: total=1536,1928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.908 00:15:43.908 Run status group 0 (all jobs): 00:15:43.908 READ: bw=28.3MiB/s (29.7MB/s), 6138KiB/s-9.82MiB/s (6285kB/s-10.3MB/s), io=28.4MiB (29.7MB), run=1000-1001msec 00:15:43.908 WRITE: bw=33.2MiB/s (34.8MB/s), 7704KiB/s-9.99MiB/s (7889kB/s-10.5MB/s), io=33.3MiB (34.9MB), run=1000-1001msec 00:15:43.908 00:15:43.908 Disk stats (read/write): 00:15:43.908 nvme0n1: ios=1535/1536, merge=0/0, ticks=463/380, in_queue=843, util=88.68% 00:15:43.908 nvme0n2: ios=1470/1536, merge=0/0, ticks=480/361, in_queue=841, util=89.39% 00:15:43.908 nvme0n3: ios=2198/2560, merge=0/0, ticks=407/385, in_queue=792, util=89.41% 00:15:43.908 nvme0n4: ios=1416/1536, merge=0/0, ticks=407/374, in_queue=781, util=89.26% 00:15:43.908 18:10:41 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:43.908 [global] 00:15:43.908 thread=1 00:15:43.908 invalidate=1 
00:15:43.908 rw=write 00:15:43.908 time_based=1 00:15:43.908 runtime=1 00:15:43.908 ioengine=libaio 00:15:43.908 direct=1 00:15:43.908 bs=4096 00:15:43.908 iodepth=128 00:15:43.908 norandommap=0 00:15:43.908 numjobs=1 00:15:43.908 00:15:43.908 verify_dump=1 00:15:43.908 verify_backlog=512 00:15:43.908 verify_state_save=0 00:15:43.908 do_verify=1 00:15:43.908 verify=crc32c-intel 00:15:43.908 [job0] 00:15:43.908 filename=/dev/nvme0n1 00:15:43.908 [job1] 00:15:43.908 filename=/dev/nvme0n2 00:15:43.908 [job2] 00:15:43.908 filename=/dev/nvme0n3 00:15:43.908 [job3] 00:15:43.908 filename=/dev/nvme0n4 00:15:43.908 Could not set queue depth (nvme0n1) 00:15:43.908 Could not set queue depth (nvme0n2) 00:15:43.908 Could not set queue depth (nvme0n3) 00:15:43.908 Could not set queue depth (nvme0n4) 00:15:43.908 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.908 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.908 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.908 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.908 fio-3.35 00:15:43.908 Starting 4 threads 00:15:45.284 00:15:45.284 job0: (groupid=0, jobs=1): err= 0: pid=75047: Thu Apr 25 18:10:42 2024 00:15:45.284 read: IOPS=2312, BW=9249KiB/s (9470kB/s)(9304KiB/1006msec) 00:15:45.284 slat (usec): min=3, max=13761, avg=191.14, stdev=972.38 00:15:45.284 clat (usec): min=1835, max=45638, avg=23775.53, stdev=5711.91 00:15:45.284 lat (usec): min=5401, max=45654, avg=23966.66, stdev=5800.10 00:15:45.284 clat percentiles (usec): 00:15:45.284 | 1.00th=[ 5932], 5.00th=[16909], 10.00th=[19530], 20.00th=[20579], 00:15:45.284 | 30.00th=[21103], 40.00th=[21627], 50.00th=[22152], 60.00th=[23987], 00:15:45.284 | 70.00th=[26084], 80.00th=[27395], 90.00th=[31851], 95.00th=[34341], 00:15:45.284 | 99.00th=[40633], 99.50th=[44303], 99.90th=[45351], 99.95th=[45876], 00:15:45.284 | 99.99th=[45876] 00:15:45.284 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:15:45.284 slat (usec): min=12, max=6846, avg=209.22, stdev=796.32 00:15:45.284 clat (usec): min=14994, max=55898, avg=27886.49, stdev=9049.30 00:15:45.284 lat (usec): min=15049, max=55963, avg=28095.72, stdev=9114.00 00:15:45.284 clat percentiles (usec): 00:15:45.284 | 1.00th=[18220], 5.00th=[20579], 10.00th=[20841], 20.00th=[21365], 00:15:45.284 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22414], 60.00th=[27657], 00:15:45.284 | 70.00th=[30802], 80.00th=[34341], 90.00th=[42206], 95.00th=[46924], 00:15:45.284 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:15:45.284 | 99.99th=[55837] 00:15:45.284 bw ( KiB/s): min= 8192, max=12288, per=18.02%, avg=10240.00, stdev=2896.31, samples=2 00:15:45.284 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:15:45.284 lat (msec) : 2=0.02%, 10=0.86%, 20=6.10%, 50=91.38%, 100=1.64% 00:15:45.284 cpu : usr=2.49%, sys=8.36%, ctx=309, majf=0, minf=8 00:15:45.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:45.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.284 issued rwts: total=2326,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.284 latency : target=0, window=0, percentile=100.00%, depth=128 
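(Sketch, not part of the captured run: the wrapper-generated job file dumped above for this `fio-wrapper -p nvmf -i 4096 -d 128 -t write` pass maps onto a plain command-line fio invocation along these lines, assuming the same /dev/nvme0nX namespace names; only job0 is shown, the other three differ only in --name/--filename.)
    # Single-job equivalent of the [job0] section dumped above; repeat with
    # --name=jobN --filename=/dev/nvme0nN for the n2..n4 namespaces.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=write --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
        --thread=1 --invalidate=1 --numjobs=1 --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0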
00:15:45.284 job1: (groupid=0, jobs=1): err= 0: pid=75048: Thu Apr 25 18:10:42 2024 00:15:45.284 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:15:45.284 slat (usec): min=5, max=8420, avg=172.75, stdev=870.59 00:15:45.284 clat (usec): min=15496, max=34956, avg=22358.05, stdev=2574.20 00:15:45.284 lat (usec): min=15518, max=34990, avg=22530.80, stdev=2661.51 00:15:45.284 clat percentiles (usec): 00:15:45.284 | 1.00th=[16450], 5.00th=[18744], 10.00th=[20055], 20.00th=[20579], 00:15:45.284 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[22152], 00:15:45.284 | 70.00th=[23200], 80.00th=[24511], 90.00th=[26084], 95.00th=[26870], 00:15:45.284 | 99.00th=[30540], 99.50th=[30802], 99.90th=[33817], 99.95th=[34866], 00:15:45.284 | 99.99th=[34866] 00:15:45.284 write: IOPS=3010, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1005msec); 0 zone resets 00:15:45.284 slat (usec): min=11, max=8749, avg=176.23, stdev=785.81 00:15:45.284 clat (usec): min=3490, max=42217, avg=22922.24, stdev=4900.92 00:15:45.284 lat (usec): min=5841, max=42245, avg=23098.47, stdev=4939.38 00:15:45.284 clat percentiles (usec): 00:15:45.284 | 1.00th=[12125], 5.00th=[17171], 10.00th=[17695], 20.00th=[20317], 00:15:45.284 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21890], 60.00th=[22414], 00:15:45.284 | 70.00th=[23462], 80.00th=[25035], 90.00th=[30278], 95.00th=[32375], 00:15:45.284 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:15:45.284 | 99.99th=[42206] 00:15:45.284 bw ( KiB/s): min=10896, max=12312, per=20.42%, avg=11604.00, stdev=1001.26, samples=2 00:15:45.284 iops : min= 2724, max= 3078, avg=2901.00, stdev=250.32, samples=2 00:15:45.284 lat (msec) : 4=0.02%, 10=0.29%, 20=14.12%, 50=85.57% 00:15:45.284 cpu : usr=3.39%, sys=8.76%, ctx=303, majf=0, minf=9 00:15:45.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:45.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.284 issued rwts: total=2560,3026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.284 job2: (groupid=0, jobs=1): err= 0: pid=75049: Thu Apr 25 18:10:42 2024 00:15:45.284 read: IOPS=4126, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1002msec) 00:15:45.284 slat (usec): min=9, max=3873, avg=108.53, stdev=477.05 00:15:45.284 clat (usec): min=381, max=20088, avg=14294.35, stdev=2264.51 00:15:45.284 lat (usec): min=3519, max=20105, avg=14402.87, stdev=2231.82 00:15:45.284 clat percentiles (usec): 00:15:45.284 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11469], 20.00th=[12780], 00:15:45.284 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13829], 60.00th=[14615], 00:15:45.284 | 70.00th=[16188], 80.00th=[16581], 90.00th=[16909], 95.00th=[17433], 00:15:45.284 | 99.00th=[18220], 99.50th=[18482], 99.90th=[20055], 99.95th=[20055], 00:15:45.284 | 99.99th=[20055] 00:15:45.284 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:15:45.284 slat (usec): min=8, max=4659, avg=111.05, stdev=473.31 00:15:45.284 clat (usec): min=7858, max=21986, avg=14616.63, stdev=2589.51 00:15:45.284 lat (usec): min=7875, max=22012, avg=14727.68, stdev=2596.56 00:15:45.284 clat percentiles (usec): 00:15:45.284 | 1.00th=[10421], 5.00th=[11207], 10.00th=[11600], 20.00th=[12780], 00:15:45.284 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13960], 60.00th=[14353], 00:15:45.284 | 70.00th=[15664], 80.00th=[17171], 90.00th=[18220], 
95.00th=[19268], 00:15:45.284 | 99.00th=[21365], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:15:45.284 | 99.99th=[21890] 00:15:45.284 bw ( KiB/s): min=15672, max=20521, per=31.85%, avg=18096.50, stdev=3428.76, samples=2 00:15:45.284 iops : min= 3918, max= 5130, avg=4524.00, stdev=857.01, samples=2 00:15:45.284 lat (usec) : 500=0.01% 00:15:45.284 lat (msec) : 4=0.27%, 10=0.51%, 20=97.60%, 50=1.60% 00:15:45.284 cpu : usr=4.30%, sys=13.79%, ctx=687, majf=0, minf=7 00:15:45.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:45.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.284 issued rwts: total=4135,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.284 job3: (groupid=0, jobs=1): err= 0: pid=75050: Thu Apr 25 18:10:42 2024 00:15:45.284 read: IOPS=3747, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1002msec) 00:15:45.284 slat (usec): min=4, max=8995, avg=121.15, stdev=590.31 00:15:45.284 clat (usec): min=1327, max=30471, avg=15972.56, stdev=4661.08 00:15:45.284 lat (usec): min=1344, max=30499, avg=16093.71, stdev=4658.58 00:15:45.284 clat percentiles (usec): 00:15:45.285 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[12125], 20.00th=[12649], 00:15:45.285 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[15139], 00:15:45.285 | 70.00th=[19268], 80.00th=[20579], 90.00th=[22152], 95.00th=[24773], 00:15:45.285 | 99.00th=[28705], 99.50th=[29492], 99.90th=[29492], 99.95th=[30540], 00:15:45.285 | 99.99th=[30540] 00:15:45.285 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:15:45.285 slat (usec): min=11, max=6355, avg=124.88, stdev=598.29 00:15:45.285 clat (usec): min=9970, max=29861, avg=16245.12, stdev=4647.42 00:15:45.285 lat (usec): min=10012, max=29887, avg=16369.99, stdev=4672.41 00:15:45.285 clat percentiles (usec): 00:15:45.285 | 1.00th=[10814], 5.00th=[11207], 10.00th=[11469], 20.00th=[12125], 00:15:45.285 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[15139], 00:15:45.285 | 70.00th=[18220], 80.00th=[21627], 90.00th=[22676], 95.00th=[23462], 00:15:45.285 | 99.00th=[29230], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:15:45.285 | 99.99th=[29754] 00:15:45.285 bw ( KiB/s): min=12288, max=20480, per=28.84%, avg=16384.00, stdev=5792.62, samples=2 00:15:45.285 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:15:45.285 lat (msec) : 2=0.06%, 10=0.78%, 20=73.67%, 50=25.49% 00:15:45.285 cpu : usr=4.50%, sys=11.19%, ctx=522, majf=0, minf=5 00:15:45.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:45.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.285 issued rwts: total=3755,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.285 00:15:45.285 Run status group 0 (all jobs): 00:15:45.285 READ: bw=49.6MiB/s (52.0MB/s), 9249KiB/s-16.1MiB/s (9470kB/s-16.9MB/s), io=49.9MiB (52.3MB), run=1002-1006msec 00:15:45.285 WRITE: bw=55.5MiB/s (58.2MB/s), 9.94MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=55.8MiB (58.5MB), run=1002-1006msec 00:15:45.285 00:15:45.285 Disk stats (read/write): 00:15:45.285 nvme0n1: ios=2098/2167, merge=0/0, ticks=15837/18217, in_queue=34054, util=89.39% 00:15:45.285 nvme0n2: 
ios=2455/2560, merge=0/0, ticks=17160/16489, in_queue=33649, util=89.91% 00:15:45.285 nvme0n3: ios=3617/4096, merge=0/0, ticks=11886/12646, in_queue=24532, util=89.20% 00:15:45.285 nvme0n4: ios=3392/3584, merge=0/0, ticks=12711/12208, in_queue=24919, util=89.76% 00:15:45.285 18:10:42 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:45.285 [global] 00:15:45.285 thread=1 00:15:45.285 invalidate=1 00:15:45.285 rw=randwrite 00:15:45.285 time_based=1 00:15:45.285 runtime=1 00:15:45.285 ioengine=libaio 00:15:45.285 direct=1 00:15:45.285 bs=4096 00:15:45.285 iodepth=128 00:15:45.285 norandommap=0 00:15:45.285 numjobs=1 00:15:45.285 00:15:45.285 verify_dump=1 00:15:45.285 verify_backlog=512 00:15:45.285 verify_state_save=0 00:15:45.285 do_verify=1 00:15:45.285 verify=crc32c-intel 00:15:45.285 [job0] 00:15:45.285 filename=/dev/nvme0n1 00:15:45.285 [job1] 00:15:45.285 filename=/dev/nvme0n2 00:15:45.285 [job2] 00:15:45.285 filename=/dev/nvme0n3 00:15:45.285 [job3] 00:15:45.285 filename=/dev/nvme0n4 00:15:45.285 Could not set queue depth (nvme0n1) 00:15:45.285 Could not set queue depth (nvme0n2) 00:15:45.285 Could not set queue depth (nvme0n3) 00:15:45.285 Could not set queue depth (nvme0n4) 00:15:45.285 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.285 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.285 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.285 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.285 fio-3.35 00:15:45.285 Starting 4 threads 00:15:46.697 00:15:46.697 job0: (groupid=0, jobs=1): err= 0: pid=75109: Thu Apr 25 18:10:44 2024 00:15:46.697 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:15:46.697 slat (usec): min=4, max=8903, avg=197.86, stdev=841.93 00:15:46.697 clat (usec): min=14262, max=46794, avg=24564.13, stdev=6203.52 00:15:46.697 lat (usec): min=14276, max=46823, avg=24761.98, stdev=6270.17 00:15:46.697 clat percentiles (usec): 00:15:46.697 | 1.00th=[14353], 5.00th=[14615], 10.00th=[15008], 20.00th=[18220], 00:15:46.697 | 30.00th=[22676], 40.00th=[24249], 50.00th=[25297], 60.00th=[26084], 00:15:46.697 | 70.00th=[26870], 80.00th=[28967], 90.00th=[32375], 95.00th=[34341], 00:15:46.697 | 99.00th=[41681], 99.50th=[42730], 99.90th=[45876], 99.95th=[46924], 00:15:46.697 | 99.99th=[46924] 00:15:46.697 write: IOPS=2735, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1011msec); 0 zone resets 00:15:46.697 slat (usec): min=4, max=8818, avg=171.20, stdev=645.48 00:15:46.697 clat (usec): min=8511, max=42805, avg=23449.64, stdev=6627.43 00:15:46.697 lat (usec): min=8540, max=43910, avg=23620.85, stdev=6675.08 00:15:46.697 clat percentiles (usec): 00:15:46.697 | 1.00th=[13566], 5.00th=[13960], 10.00th=[14484], 20.00th=[15139], 00:15:46.697 | 30.00th=[19006], 40.00th=[22414], 50.00th=[25297], 60.00th=[26346], 00:15:46.697 | 70.00th=[27919], 80.00th=[28967], 90.00th=[30802], 95.00th=[32637], 00:15:46.697 | 99.00th=[38536], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:15:46.697 | 99.99th=[42730] 00:15:46.697 bw ( KiB/s): min= 8816, max=12288, per=25.37%, avg=10552.00, stdev=2455.07, samples=2 00:15:46.697 iops : min= 2204, max= 3072, avg=2638.00, stdev=613.77, samples=2 00:15:46.697 lat (msec) : 10=0.24%, 20=27.11%, 50=72.64% 
00:15:46.697 cpu : usr=2.18%, sys=7.82%, ctx=837, majf=0, minf=11 00:15:46.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:46.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.697 issued rwts: total=2560,2766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.697 job1: (groupid=0, jobs=1): err= 0: pid=75110: Thu Apr 25 18:10:44 2024 00:15:46.697 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:15:46.697 slat (usec): min=3, max=10412, avg=218.86, stdev=1022.25 00:15:46.697 clat (usec): min=16598, max=40536, avg=27121.19, stdev=3553.79 00:15:46.697 lat (usec): min=16611, max=40548, avg=27340.05, stdev=3642.17 00:15:46.697 clat percentiles (usec): 00:15:46.697 | 1.00th=[18482], 5.00th=[21890], 10.00th=[23200], 20.00th=[25035], 00:15:46.697 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[27132], 00:15:46.697 | 70.00th=[27395], 80.00th=[30016], 90.00th=[32113], 95.00th=[33817], 00:15:46.697 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39584], 99.95th=[40633], 00:15:46.697 | 99.99th=[40633] 00:15:46.697 write: IOPS=2501, BW=9.77MiB/s (10.2MB/s)(9.86MiB/1009msec); 0 zone resets 00:15:46.697 slat (usec): min=5, max=14501, avg=213.08, stdev=1164.57 00:15:46.697 clat (usec): min=2374, max=46738, avg=28276.05, stdev=4713.56 00:15:46.697 lat (usec): min=13571, max=46764, avg=28489.12, stdev=4826.05 00:15:46.697 clat percentiles (usec): 00:15:46.697 | 1.00th=[14222], 5.00th=[19792], 10.00th=[23462], 20.00th=[25035], 00:15:46.697 | 30.00th=[26084], 40.00th=[27919], 50.00th=[28967], 60.00th=[29492], 00:15:46.697 | 70.00th=[30540], 80.00th=[31851], 90.00th=[32900], 95.00th=[34866], 00:15:46.697 | 99.00th=[40633], 99.50th=[41681], 99.90th=[45876], 99.95th=[46400], 00:15:46.697 | 99.99th=[46924] 00:15:46.697 bw ( KiB/s): min= 9536, max= 9632, per=23.04%, avg=9584.00, stdev=67.88, samples=2 00:15:46.697 iops : min= 2384, max= 2408, avg=2396.00, stdev=16.97, samples=2 00:15:46.697 lat (msec) : 4=0.02%, 20=3.85%, 50=96.13% 00:15:46.697 cpu : usr=2.38%, sys=6.15%, ctx=609, majf=0, minf=11 00:15:46.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:46.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.697 issued rwts: total=2048,2524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.697 job2: (groupid=0, jobs=1): err= 0: pid=75112: Thu Apr 25 18:10:44 2024 00:15:46.697 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:15:46.697 slat (usec): min=3, max=14276, avg=186.83, stdev=848.55 00:15:46.697 clat (usec): min=9428, max=34370, avg=24467.19, stdev=4980.66 00:15:46.697 lat (usec): min=12050, max=34424, avg=24654.02, stdev=5027.74 00:15:46.697 clat percentiles (usec): 00:15:46.698 | 1.00th=[12125], 5.00th=[12649], 10.00th=[15926], 20.00th=[21627], 00:15:46.698 | 30.00th=[23462], 40.00th=[24511], 50.00th=[25560], 60.00th=[26346], 00:15:46.698 | 70.00th=[27395], 80.00th=[28705], 90.00th=[29754], 95.00th=[31065], 00:15:46.698 | 99.00th=[32375], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:15:46.698 | 99.99th=[34341] 00:15:46.698 write: IOPS=2652, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1010msec); 0 zone resets 00:15:46.698 slat (usec): min=5, 
max=15300, avg=187.36, stdev=797.47 00:15:46.698 clat (usec): min=3344, max=36613, avg=24375.25, stdev=5386.90 00:15:46.698 lat (usec): min=3370, max=36649, avg=24562.60, stdev=5452.04 00:15:46.698 clat percentiles (usec): 00:15:46.698 | 1.00th=[10421], 5.00th=[15664], 10.00th=[16057], 20.00th=[17957], 00:15:46.698 | 30.00th=[22676], 40.00th=[25035], 50.00th=[26346], 60.00th=[27132], 00:15:46.698 | 70.00th=[27657], 80.00th=[28705], 90.00th=[29754], 95.00th=[30802], 00:15:46.698 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:15:46.698 | 99.99th=[36439] 00:15:46.698 bw ( KiB/s): min= 8192, max=12312, per=24.65%, avg=10252.00, stdev=2913.28, samples=2 00:15:46.698 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:15:46.698 lat (msec) : 4=0.11%, 10=0.31%, 20=21.34%, 50=78.24% 00:15:46.698 cpu : usr=2.68%, sys=7.73%, ctx=871, majf=0, minf=11 00:15:46.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:46.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.698 issued rwts: total=2560,2679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.698 job3: (groupid=0, jobs=1): err= 0: pid=75113: Thu Apr 25 18:10:44 2024 00:15:46.698 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:15:46.698 slat (usec): min=4, max=11761, avg=213.40, stdev=1024.84 00:15:46.698 clat (usec): min=15645, max=41072, avg=26657.09, stdev=3887.01 00:15:46.698 lat (usec): min=15657, max=41086, avg=26870.49, stdev=3976.00 00:15:46.698 clat percentiles (usec): 00:15:46.698 | 1.00th=[16909], 5.00th=[19530], 10.00th=[22414], 20.00th=[24773], 00:15:46.698 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:15:46.698 | 70.00th=[27657], 80.00th=[30278], 90.00th=[32113], 95.00th=[33162], 00:15:46.698 | 99.00th=[37487], 99.50th=[39060], 99.90th=[40109], 99.95th=[41157], 00:15:46.698 | 99.99th=[41157] 00:15:46.698 write: IOPS=2521, BW=9.85MiB/s (10.3MB/s)(9.94MiB/1009msec); 0 zone resets 00:15:46.698 slat (usec): min=5, max=15080, avg=215.99, stdev=1146.02 00:15:46.698 clat (usec): min=2282, max=46928, avg=28378.98, stdev=4497.20 00:15:46.698 lat (usec): min=12717, max=46961, avg=28594.97, stdev=4613.31 00:15:46.698 clat percentiles (usec): 00:15:46.698 | 1.00th=[13173], 5.00th=[20579], 10.00th=[23725], 20.00th=[25297], 00:15:46.698 | 30.00th=[26608], 40.00th=[27919], 50.00th=[28705], 60.00th=[29754], 00:15:46.698 | 70.00th=[30802], 80.00th=[31851], 90.00th=[32375], 95.00th=[34341], 00:15:46.698 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45876], 99.95th=[46400], 00:15:46.698 | 99.99th=[46924] 00:15:46.698 bw ( KiB/s): min= 9098, max=10248, per=23.26%, avg=9673.00, stdev=813.17, samples=2 00:15:46.698 iops : min= 2274, max= 2562, avg=2418.00, stdev=203.65, samples=2 00:15:46.698 lat (msec) : 4=0.02%, 20=4.36%, 50=95.62% 00:15:46.698 cpu : usr=2.18%, sys=6.35%, ctx=586, majf=0, minf=17 00:15:46.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:46.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.698 issued rwts: total=2048,2544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.698 00:15:46.698 Run status group 0 (all jobs): 
00:15:46.698 READ: bw=35.6MiB/s (37.3MB/s), 8119KiB/s-9.90MiB/s (8314kB/s-10.4MB/s), io=36.0MiB (37.7MB), run=1009-1011msec 00:15:46.698 WRITE: bw=40.6MiB/s (42.6MB/s), 9.77MiB/s-10.7MiB/s (10.2MB/s-11.2MB/s), io=41.1MiB (43.1MB), run=1009-1011msec 00:15:46.698 00:15:46.698 Disk stats (read/write): 00:15:46.698 nvme0n1: ios=2098/2518, merge=0/0, ticks=17146/19461, in_queue=36607, util=87.86% 00:15:46.698 nvme0n2: ios=1782/2048, merge=0/0, ticks=22960/27375, in_queue=50335, util=87.73% 00:15:46.698 nvme0n3: ios=2048/2401, merge=0/0, ticks=20741/25483, in_queue=46224, util=88.60% 00:15:46.698 nvme0n4: ios=1725/2048, merge=0/0, ticks=22492/27817, in_queue=50309, util=89.45% 00:15:46.698 18:10:44 -- target/fio.sh@55 -- # sync 00:15:46.698 18:10:44 -- target/fio.sh@59 -- # fio_pid=75127 00:15:46.698 18:10:44 -- target/fio.sh@61 -- # sleep 3 00:15:46.698 18:10:44 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:46.698 [global] 00:15:46.698 thread=1 00:15:46.698 invalidate=1 00:15:46.698 rw=read 00:15:46.698 time_based=1 00:15:46.698 runtime=10 00:15:46.698 ioengine=libaio 00:15:46.698 direct=1 00:15:46.698 bs=4096 00:15:46.698 iodepth=1 00:15:46.698 norandommap=1 00:15:46.698 numjobs=1 00:15:46.698 00:15:46.698 [job0] 00:15:46.698 filename=/dev/nvme0n1 00:15:46.698 [job1] 00:15:46.698 filename=/dev/nvme0n2 00:15:46.698 [job2] 00:15:46.698 filename=/dev/nvme0n3 00:15:46.698 [job3] 00:15:46.698 filename=/dev/nvme0n4 00:15:46.698 Could not set queue depth (nvme0n1) 00:15:46.698 Could not set queue depth (nvme0n2) 00:15:46.698 Could not set queue depth (nvme0n3) 00:15:46.698 Could not set queue depth (nvme0n4) 00:15:46.698 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.698 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.698 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.698 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.698 fio-3.35 00:15:46.698 Starting 4 threads 00:15:49.982 18:10:47 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:49.982 fio: pid=75175, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:49.982 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=52666368, buflen=4096 00:15:49.982 18:10:47 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:49.982 fio: pid=75174, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:49.982 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=56590336, buflen=4096 00:15:49.982 18:10:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:49.982 18:10:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:50.241 fio: pid=75172, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.241 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=40820736, buflen=4096 00:15:50.241 18:10:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.241 18:10:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:50.501 fio: pid=75173, 
err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.501 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=45420544, buflen=4096 00:15:50.501 00:15:50.501 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75172: Thu Apr 25 18:10:48 2024 00:15:50.501 read: IOPS=2940, BW=11.5MiB/s (12.0MB/s)(38.9MiB/3389msec) 00:15:50.501 slat (usec): min=11, max=14798, avg=28.62, stdev=240.71 00:15:50.501 clat (usec): min=128, max=2642, avg=308.93, stdev=60.47 00:15:50.501 lat (usec): min=141, max=14986, avg=337.55, stdev=247.23 00:15:50.501 clat percentiles (usec): 00:15:50.501 | 1.00th=[ 147], 5.00th=[ 231], 10.00th=[ 249], 20.00th=[ 265], 00:15:50.501 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:15:50.501 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 392], 00:15:50.501 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 529], 99.95th=[ 734], 00:15:50.501 | 99.99th=[ 2638] 00:15:50.501 bw ( KiB/s): min=10840, max=13224, per=21.84%, avg=11530.67, stdev=931.84, samples=6 00:15:50.501 iops : min= 2710, max= 3306, avg=2882.67, stdev=232.96, samples=6 00:15:50.501 lat (usec) : 250=10.37%, 500=89.50%, 750=0.08% 00:15:50.501 lat (msec) : 2=0.03%, 4=0.01% 00:15:50.501 cpu : usr=1.24%, sys=5.61%, ctx=9986, majf=0, minf=1 00:15:50.501 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 issued rwts: total=9967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.501 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.501 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75173: Thu Apr 25 18:10:48 2024 00:15:50.501 read: IOPS=3066, BW=12.0MiB/s (12.6MB/s)(43.3MiB/3617msec) 00:15:50.501 slat (usec): min=12, max=16727, avg=24.53, stdev=276.48 00:15:50.501 clat (usec): min=110, max=3211, avg=299.94, stdev=82.90 00:15:50.501 lat (usec): min=142, max=16998, avg=324.47, stdev=287.30 00:15:50.501 clat percentiles (usec): 00:15:50.501 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 169], 20.00th=[ 255], 00:15:50.501 | 30.00th=[ 277], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 322], 00:15:50.501 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 404], 00:15:50.501 | 99.00th=[ 453], 99.50th=[ 478], 99.90th=[ 693], 99.95th=[ 971], 00:15:50.501 | 99.99th=[ 2180] 00:15:50.501 bw ( KiB/s): min=10688, max=12976, per=21.70%, avg=11452.00, stdev=866.60, samples=6 00:15:50.501 iops : min= 2672, max= 3244, avg=2863.00, stdev=216.65, samples=6 00:15:50.501 lat (usec) : 250=18.31%, 500=81.40%, 750=0.20%, 1000=0.04% 00:15:50.501 lat (msec) : 2=0.02%, 4=0.03% 00:15:50.501 cpu : usr=0.91%, sys=4.37%, ctx=11099, majf=0, minf=1 00:15:50.501 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 issued rwts: total=11090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.501 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.501 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75174: Thu Apr 25 18:10:48 2024 00:15:50.501 read: IOPS=4331, BW=16.9MiB/s (17.7MB/s)(54.0MiB/3190msec) 00:15:50.501 slat (usec): 
min=12, max=10831, avg=17.76, stdev=113.81 00:15:50.501 clat (usec): min=160, max=2328, avg=211.74, stdev=37.09 00:15:50.501 lat (usec): min=175, max=11028, avg=229.50, stdev=119.80 00:15:50.501 clat percentiles (usec): 00:15:50.501 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:15:50.501 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:15:50.501 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 260], 00:15:50.501 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 383], 99.95th=[ 660], 00:15:50.501 | 99.99th=[ 1909] 00:15:50.501 bw ( KiB/s): min=16624, max=17728, per=32.81%, avg=17316.00, stdev=415.01, samples=6 00:15:50.501 iops : min= 4156, max= 4432, avg=4329.00, stdev=103.75, samples=6 00:15:50.501 lat (usec) : 250=92.49%, 500=7.44%, 750=0.03%, 1000=0.01% 00:15:50.501 lat (msec) : 2=0.02%, 4=0.01% 00:15:50.501 cpu : usr=1.00%, sys=5.61%, ctx=13826, majf=0, minf=1 00:15:50.501 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 issued rwts: total=13817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.501 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.501 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75175: Thu Apr 25 18:10:48 2024 00:15:50.501 read: IOPS=4384, BW=17.1MiB/s (18.0MB/s)(50.2MiB/2933msec) 00:15:50.501 slat (nsec): min=11672, max=81777, avg=16536.82, stdev=5296.72 00:15:50.501 clat (usec): min=139, max=2509, avg=210.07, stdev=49.62 00:15:50.501 lat (usec): min=152, max=2525, avg=226.61, stdev=50.48 00:15:50.501 clat percentiles (usec): 00:15:50.501 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 188], 00:15:50.501 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:15:50.501 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 258], 00:15:50.501 | 99.00th=[ 302], 99.50th=[ 347], 99.90th=[ 611], 99.95th=[ 988], 00:15:50.501 | 99.99th=[ 2245] 00:15:50.501 bw ( KiB/s): min=16504, max=17728, per=32.39%, avg=17096.00, stdev=448.00, samples=5 00:15:50.501 iops : min= 4126, max= 4432, avg=4274.00, stdev=112.00, samples=5 00:15:50.501 lat (usec) : 250=92.71%, 500=7.12%, 750=0.09%, 1000=0.02% 00:15:50.501 lat (msec) : 2=0.02%, 4=0.02% 00:15:50.501 cpu : usr=1.09%, sys=5.97%, ctx=12860, majf=0, minf=1 00:15:50.501 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.501 issued rwts: total=12859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.501 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.501 00:15:50.501 Run status group 0 (all jobs): 00:15:50.501 READ: bw=51.5MiB/s (54.0MB/s), 11.5MiB/s-17.1MiB/s (12.0MB/s-18.0MB/s), io=186MiB (195MB), run=2933-3617msec 00:15:50.501 00:15:50.501 Disk stats (read/write): 00:15:50.501 nvme0n1: ios=9907/0, merge=0/0, ticks=3137/0, in_queue=3137, util=95.14% 00:15:50.501 nvme0n2: ios=9840/0, merge=0/0, ticks=3196/0, in_queue=3196, util=95.27% 00:15:50.501 nvme0n3: ios=13491/0, merge=0/0, ticks=2930/0, in_queue=2930, util=96.21% 00:15:50.501 nvme0n4: ios=12532/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.76% 00:15:50.501 18:10:48 -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.501 18:10:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:50.760 18:10:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.760 18:10:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:50.760 18:10:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.760 18:10:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:51.019 18:10:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:51.020 18:10:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:51.279 18:10:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:51.279 18:10:49 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:51.537 18:10:49 -- target/fio.sh@69 -- # fio_status=0 00:15:51.537 18:10:49 -- target/fio.sh@70 -- # wait 75127 00:15:51.537 18:10:49 -- target/fio.sh@70 -- # fio_status=4 00:15:51.537 18:10:49 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.537 18:10:49 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:51.537 18:10:49 -- common/autotest_common.sh@1198 -- # local i=0 00:15:51.537 18:10:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:51.537 18:10:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.537 18:10:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:51.537 18:10:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.796 nvmf hotplug test: fio failed as expected 00:15:51.796 18:10:49 -- common/autotest_common.sh@1210 -- # return 0 00:15:51.796 18:10:49 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:51.796 18:10:49 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:51.796 18:10:49 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.055 18:10:49 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:52.055 18:10:49 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:52.055 18:10:49 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:52.055 18:10:49 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:52.055 18:10:49 -- target/fio.sh@91 -- # nvmftestfini 00:15:52.055 18:10:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:52.055 18:10:49 -- nvmf/common.sh@116 -- # sync 00:15:52.055 18:10:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:52.055 18:10:49 -- nvmf/common.sh@119 -- # set +e 00:15:52.055 18:10:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:52.055 18:10:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:52.055 rmmod nvme_tcp 00:15:52.055 rmmod nvme_fabrics 00:15:52.055 rmmod nvme_keyring 00:15:52.055 18:10:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:52.055 18:10:49 -- nvmf/common.sh@123 -- # set -e 00:15:52.055 18:10:49 -- nvmf/common.sh@124 -- # return 0 00:15:52.055 18:10:49 -- nvmf/common.sh@477 -- # '[' -n 74634 ']' 00:15:52.055 18:10:49 -- 
nvmf/common.sh@478 -- # killprocess 74634 00:15:52.055 18:10:49 -- common/autotest_common.sh@926 -- # '[' -z 74634 ']' 00:15:52.055 18:10:49 -- common/autotest_common.sh@930 -- # kill -0 74634 00:15:52.055 18:10:49 -- common/autotest_common.sh@931 -- # uname 00:15:52.055 18:10:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:52.055 18:10:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74634 00:15:52.055 killing process with pid 74634 00:15:52.055 18:10:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:52.055 18:10:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:52.055 18:10:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74634' 00:15:52.055 18:10:49 -- common/autotest_common.sh@945 -- # kill 74634 00:15:52.055 18:10:49 -- common/autotest_common.sh@950 -- # wait 74634 00:15:52.313 18:10:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:52.313 18:10:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:52.313 18:10:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:52.313 18:10:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.313 18:10:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:52.313 18:10:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.313 18:10:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.313 18:10:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.313 18:10:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:52.313 00:15:52.313 real 0m19.101s 00:15:52.313 user 1m12.102s 00:15:52.313 sys 0m8.815s 00:15:52.313 18:10:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.313 ************************************ 00:15:52.313 END TEST nvmf_fio_target 00:15:52.313 ************************************ 00:15:52.313 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:52.313 18:10:50 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:52.313 18:10:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:52.313 18:10:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:52.313 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:52.313 ************************************ 00:15:52.313 START TEST nvmf_bdevio 00:15:52.313 ************************************ 00:15:52.313 18:10:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:52.572 * Looking for test storage... 
00:15:52.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.572 18:10:50 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.572 18:10:50 -- nvmf/common.sh@7 -- # uname -s 00:15:52.572 18:10:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.572 18:10:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.572 18:10:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.572 18:10:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.572 18:10:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.572 18:10:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.572 18:10:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.572 18:10:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.572 18:10:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.572 18:10:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.572 18:10:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:52.572 18:10:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:52.572 18:10:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.572 18:10:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.572 18:10:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.572 18:10:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.572 18:10:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.572 18:10:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.572 18:10:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.573 18:10:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.573 18:10:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.573 18:10:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.573 18:10:50 -- 
paths/export.sh@5 -- # export PATH 00:15:52.573 18:10:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.573 18:10:50 -- nvmf/common.sh@46 -- # : 0 00:15:52.573 18:10:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:52.573 18:10:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:52.573 18:10:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:52.573 18:10:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.573 18:10:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.573 18:10:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:52.573 18:10:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:52.573 18:10:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:52.573 18:10:50 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.573 18:10:50 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.573 18:10:50 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:52.573 18:10:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:52.573 18:10:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.573 18:10:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:52.573 18:10:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:52.573 18:10:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:52.573 18:10:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.573 18:10:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.573 18:10:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.573 18:10:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:52.573 18:10:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:52.573 18:10:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:52.573 18:10:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:52.573 18:10:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:52.573 18:10:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:52.573 18:10:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.573 18:10:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.573 18:10:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.573 18:10:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:52.573 18:10:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.573 18:10:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.573 18:10:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.573 18:10:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.573 18:10:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.573 18:10:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.573 18:10:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.573 18:10:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.573 18:10:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:52.573 
18:10:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:52.573 Cannot find device "nvmf_tgt_br" 00:15:52.573 18:10:50 -- nvmf/common.sh@154 -- # true 00:15:52.573 18:10:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.573 Cannot find device "nvmf_tgt_br2" 00:15:52.573 18:10:50 -- nvmf/common.sh@155 -- # true 00:15:52.573 18:10:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:52.573 18:10:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:52.573 Cannot find device "nvmf_tgt_br" 00:15:52.573 18:10:50 -- nvmf/common.sh@157 -- # true 00:15:52.573 18:10:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:52.573 Cannot find device "nvmf_tgt_br2" 00:15:52.573 18:10:50 -- nvmf/common.sh@158 -- # true 00:15:52.573 18:10:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:52.573 18:10:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:52.573 18:10:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.573 18:10:50 -- nvmf/common.sh@161 -- # true 00:15:52.573 18:10:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.573 18:10:50 -- nvmf/common.sh@162 -- # true 00:15:52.573 18:10:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.573 18:10:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.573 18:10:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.573 18:10:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.573 18:10:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.573 18:10:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.573 18:10:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.573 18:10:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.573 18:10:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.831 18:10:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:52.832 18:10:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:52.832 18:10:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:52.832 18:10:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:52.832 18:10:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.832 18:10:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.832 18:10:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.832 18:10:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:52.832 18:10:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:52.832 18:10:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.832 18:10:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.832 18:10:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.832 18:10:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.832 18:10:50 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.832 18:10:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:52.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:52.832 00:15:52.832 --- 10.0.0.2 ping statistics --- 00:15:52.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.832 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:52.832 18:10:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:52.832 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.832 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:52.832 00:15:52.832 --- 10.0.0.3 ping statistics --- 00:15:52.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.832 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:52.832 18:10:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:15:52.832 00:15:52.832 --- 10.0.0.1 ping statistics --- 00:15:52.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.832 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:52.832 18:10:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.832 18:10:50 -- nvmf/common.sh@421 -- # return 0 00:15:52.832 18:10:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.832 18:10:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.832 18:10:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:52.832 18:10:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:52.832 18:10:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.832 18:10:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:52.832 18:10:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:52.832 18:10:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:52.832 18:10:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.832 18:10:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:52.832 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:52.832 18:10:50 -- nvmf/common.sh@469 -- # nvmfpid=75492 00:15:52.832 18:10:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:52.832 18:10:50 -- nvmf/common.sh@470 -- # waitforlisten 75492 00:15:52.832 18:10:50 -- common/autotest_common.sh@819 -- # '[' -z 75492 ']' 00:15:52.832 18:10:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.832 18:10:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:52.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.832 18:10:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.832 18:10:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:52.832 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:52.832 [2024-04-25 18:10:50.704651] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
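(Sketch: the nvmf_veth_init steps traced above amount to one bridged veth topology; the initiator end stays in the host namespace on 10.0.0.1 while both target interfaces, 10.0.0.2 and 10.0.0.3, live in nvmf_tgt_ns_spdk. Interface names and addresses below are copied from the trace; teardown and error handling are omitted.)
    # Target namespace plus three veth pairs; host-side peers get bridged
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bring links up and tie the host-side ends together with a bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Open the NVMe/TCP port towards the initiator and allow bridged traffic
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity checks, as exercised by the pings in the trace
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1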
00:15:52.832 [2024-04-25 18:10:50.704713] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.090 [2024-04-25 18:10:50.838808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.090 [2024-04-25 18:10:50.952300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.090 [2024-04-25 18:10:50.952453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.090 [2024-04-25 18:10:50.952467] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.090 [2024-04-25 18:10:50.952476] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.090 [2024-04-25 18:10:50.952595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:53.090 [2024-04-25 18:10:50.952754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:53.090 [2024-04-25 18:10:50.952887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:53.090 [2024-04-25 18:10:50.952893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.026 18:10:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:54.027 18:10:51 -- common/autotest_common.sh@852 -- # return 0 00:15:54.027 18:10:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.027 18:10:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:54.027 18:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.027 18:10:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.027 18:10:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.027 18:10:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:54.027 18:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.027 [2024-04-25 18:10:51.727987] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.027 18:10:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:54.027 18:10:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:54.027 18:10:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:54.027 18:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.027 Malloc0 00:15:54.027 18:10:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:54.027 18:10:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:54.027 18:10:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:54.027 18:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.027 18:10:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:54.027 18:10:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.027 18:10:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:54.027 18:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.027 18:10:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:54.027 18:10:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.027 18:10:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:54.027 18:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.027 
[2024-04-25 18:10:51.817815] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.027 18:10:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:54.027 18:10:51 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:54.027 18:10:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:54.027 18:10:51 -- nvmf/common.sh@520 -- # config=() 00:15:54.027 18:10:51 -- nvmf/common.sh@520 -- # local subsystem config 00:15:54.027 18:10:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:54.027 18:10:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:54.027 { 00:15:54.027 "params": { 00:15:54.027 "name": "Nvme$subsystem", 00:15:54.027 "trtype": "$TEST_TRANSPORT", 00:15:54.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.027 "adrfam": "ipv4", 00:15:54.027 "trsvcid": "$NVMF_PORT", 00:15:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.027 "hdgst": ${hdgst:-false}, 00:15:54.027 "ddgst": ${ddgst:-false} 00:15:54.027 }, 00:15:54.027 "method": "bdev_nvme_attach_controller" 00:15:54.027 } 00:15:54.027 EOF 00:15:54.027 )") 00:15:54.027 18:10:51 -- nvmf/common.sh@542 -- # cat 00:15:54.027 18:10:51 -- nvmf/common.sh@544 -- # jq . 00:15:54.027 18:10:51 -- nvmf/common.sh@545 -- # IFS=, 00:15:54.027 18:10:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:54.027 "params": { 00:15:54.027 "name": "Nvme1", 00:15:54.027 "trtype": "tcp", 00:15:54.027 "traddr": "10.0.0.2", 00:15:54.027 "adrfam": "ipv4", 00:15:54.027 "trsvcid": "4420", 00:15:54.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:54.027 "hdgst": false, 00:15:54.027 "ddgst": false 00:15:54.027 }, 00:15:54.027 "method": "bdev_nvme_attach_controller" 00:15:54.027 }' 00:15:54.027 [2024-04-25 18:10:51.879967] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:54.027 [2024-04-25 18:10:51.880052] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75546 ] 00:15:54.286 [2024-04-25 18:10:52.021500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.286 [2024-04-25 18:10:52.116196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.286 [2024-04-25 18:10:52.116357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.286 [2024-04-25 18:10:52.116360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.544 [2024-04-25 18:10:52.292640] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:54.544 [2024-04-25 18:10:52.292702] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:54.544 I/O targets: 00:15:54.544 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:54.544 00:15:54.544 00:15:54.544 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.544 http://cunit.sourceforge.net/ 00:15:54.544 00:15:54.544 00:15:54.544 Suite: bdevio tests on: Nvme1n1 00:15:54.544 Test: blockdev write read block ...passed 00:15:54.544 Test: blockdev write zeroes read block ...passed 00:15:54.544 Test: blockdev write zeroes read no split ...passed 00:15:54.544 Test: blockdev write zeroes read split ...passed 00:15:54.544 Test: blockdev write zeroes read split partial ...passed 00:15:54.544 Test: blockdev reset ...[2024-04-25 18:10:52.406185] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:54.544 [2024-04-25 18:10:52.406265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd79810 (9): Bad file descriptor 00:15:54.544 passed 00:15:54.544 Test: blockdev write read 8 blocks ...[2024-04-25 18:10:52.420221] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:54.544 passed 00:15:54.544 Test: blockdev write read size > 128k ...passed 00:15:54.544 Test: blockdev write read invalid size ...passed 00:15:54.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.544 Test: blockdev write read max offset ...passed 00:15:54.803 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.803 Test: blockdev writev readv 8 blocks ...passed 00:15:54.803 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.803 Test: blockdev writev readv block ...passed 00:15:54.803 Test: blockdev writev readv size > 128k ...passed 00:15:54.803 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.803 Test: blockdev comparev and writev ...[2024-04-25 18:10:52.594368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.594419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.594440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.594451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.594757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.594773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.594789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.594799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.595053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.595068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.595083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.595093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.595394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.595414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.595431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.803 [2024-04-25 18:10:52.595440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:54.803 passed 00:15:54.803 Test: blockdev nvme passthru rw ...passed 00:15:54.803 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.803 Test: blockdev nvme admin passthru ...[2024-04-25 18:10:52.678649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.803 [2024-04-25 18:10:52.678677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.678879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.803 [2024-04-25 18:10:52.678970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.679091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.803 [2024-04-25 18:10:52.679106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:54.803 [2024-04-25 18:10:52.679229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.803 [2024-04-25 18:10:52.679244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:54.803 passed 00:15:55.062 Test: blockdev copy ...passed 00:15:55.062 00:15:55.062 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.062 suites 1 1 n/a 0 0 00:15:55.062 tests 23 23 23 0 0 00:15:55.062 asserts 152 152 152 0 n/a 00:15:55.062 00:15:55.062 Elapsed time = 0.886 seconds 00:15:55.062 18:10:52 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.062 18:10:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.062 18:10:52 -- common/autotest_common.sh@10 -- # set +x 00:15:55.062 18:10:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.062 18:10:52 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:55.062 18:10:52 -- target/bdevio.sh@30 -- # nvmftestfini 00:15:55.062 18:10:52 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:15:55.062 18:10:52 -- nvmf/common.sh@116 -- # sync 00:15:55.062 18:10:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:55.062 18:10:52 -- nvmf/common.sh@119 -- # set +e 00:15:55.062 18:10:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:55.062 18:10:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:55.062 rmmod nvme_tcp 00:15:55.324 rmmod nvme_fabrics 00:15:55.324 rmmod nvme_keyring 00:15:55.324 18:10:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:55.324 18:10:53 -- nvmf/common.sh@123 -- # set -e 00:15:55.324 18:10:53 -- nvmf/common.sh@124 -- # return 0 00:15:55.324 18:10:53 -- nvmf/common.sh@477 -- # '[' -n 75492 ']' 00:15:55.324 18:10:53 -- nvmf/common.sh@478 -- # killprocess 75492 00:15:55.324 18:10:53 -- common/autotest_common.sh@926 -- # '[' -z 75492 ']' 00:15:55.324 18:10:53 -- common/autotest_common.sh@930 -- # kill -0 75492 00:15:55.324 18:10:53 -- common/autotest_common.sh@931 -- # uname 00:15:55.324 18:10:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.324 18:10:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75492 00:15:55.324 killing process with pid 75492 00:15:55.324 18:10:53 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:15:55.324 18:10:53 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:15:55.324 18:10:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75492' 00:15:55.324 18:10:53 -- common/autotest_common.sh@945 -- # kill 75492 00:15:55.324 18:10:53 -- common/autotest_common.sh@950 -- # wait 75492 00:15:55.583 18:10:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:55.583 18:10:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:55.583 18:10:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:55.583 18:10:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.583 18:10:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:55.583 18:10:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.583 18:10:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.583 18:10:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.583 18:10:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:55.583 00:15:55.583 real 0m3.163s 00:15:55.583 user 0m11.360s 00:15:55.583 sys 0m0.831s 00:15:55.583 18:10:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.583 18:10:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.583 ************************************ 00:15:55.583 END TEST nvmf_bdevio 00:15:55.583 ************************************ 00:15:55.583 18:10:53 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:15:55.583 18:10:53 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:55.583 18:10:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:55.583 18:10:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.583 18:10:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.583 ************************************ 00:15:55.583 START TEST nvmf_bdevio_no_huge 00:15:55.583 ************************************ 00:15:55.583 18:10:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:55.583 * Looking for test storage... 
00:15:55.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:55.583 18:10:53 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.583 18:10:53 -- nvmf/common.sh@7 -- # uname -s 00:15:55.583 18:10:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.583 18:10:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.583 18:10:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.583 18:10:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.583 18:10:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.583 18:10:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.583 18:10:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.583 18:10:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.584 18:10:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.584 18:10:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.584 18:10:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:55.584 18:10:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:55.584 18:10:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.584 18:10:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.584 18:10:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.584 18:10:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.584 18:10:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.584 18:10:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.584 18:10:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.584 18:10:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.584 18:10:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.584 18:10:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.584 18:10:53 -- 
paths/export.sh@5 -- # export PATH 00:15:55.584 18:10:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.584 18:10:53 -- nvmf/common.sh@46 -- # : 0 00:15:55.584 18:10:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:55.584 18:10:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:55.584 18:10:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:55.584 18:10:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.584 18:10:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.584 18:10:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:55.584 18:10:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:55.584 18:10:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:55.584 18:10:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.584 18:10:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.584 18:10:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:55.584 18:10:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:55.584 18:10:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.584 18:10:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:55.584 18:10:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:55.584 18:10:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:55.584 18:10:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.584 18:10:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.584 18:10:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.584 18:10:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:55.584 18:10:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:55.584 18:10:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:55.584 18:10:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:55.584 18:10:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:55.584 18:10:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:55.584 18:10:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.584 18:10:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.584 18:10:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:55.584 18:10:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:55.584 18:10:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.584 18:10:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.584 18:10:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.584 18:10:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.584 18:10:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.584 18:10:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.584 18:10:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.584 18:10:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.584 18:10:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:55.843 
18:10:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:55.843 Cannot find device "nvmf_tgt_br" 00:15:55.843 18:10:53 -- nvmf/common.sh@154 -- # true 00:15:55.843 18:10:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.843 Cannot find device "nvmf_tgt_br2" 00:15:55.843 18:10:53 -- nvmf/common.sh@155 -- # true 00:15:55.843 18:10:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:55.843 18:10:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:55.843 Cannot find device "nvmf_tgt_br" 00:15:55.843 18:10:53 -- nvmf/common.sh@157 -- # true 00:15:55.843 18:10:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:55.843 Cannot find device "nvmf_tgt_br2" 00:15:55.843 18:10:53 -- nvmf/common.sh@158 -- # true 00:15:55.843 18:10:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:55.843 18:10:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:55.843 18:10:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.843 18:10:53 -- nvmf/common.sh@161 -- # true 00:15:55.843 18:10:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.843 18:10:53 -- nvmf/common.sh@162 -- # true 00:15:55.843 18:10:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.843 18:10:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.843 18:10:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.843 18:10:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.843 18:10:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.843 18:10:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.843 18:10:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.843 18:10:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:55.843 18:10:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:55.843 18:10:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:55.843 18:10:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:55.843 18:10:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:55.843 18:10:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:55.843 18:10:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.843 18:10:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.843 18:10:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.843 18:10:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:55.843 18:10:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:55.843 18:10:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.843 18:10:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.843 18:10:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.101 18:10:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.101 18:10:53 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.101 18:10:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:56.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:15:56.101 00:15:56.101 --- 10.0.0.2 ping statistics --- 00:15:56.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.101 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:56.101 18:10:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:56.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:56.101 00:15:56.101 --- 10.0.0.3 ping statistics --- 00:15:56.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.101 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:56.101 18:10:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:56.101 00:15:56.101 --- 10.0.0.1 ping statistics --- 00:15:56.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.101 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:56.101 18:10:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.101 18:10:53 -- nvmf/common.sh@421 -- # return 0 00:15:56.101 18:10:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:56.101 18:10:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.101 18:10:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:56.101 18:10:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:56.101 18:10:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.101 18:10:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:56.101 18:10:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:56.101 18:10:53 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:56.101 18:10:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:56.101 18:10:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:56.101 18:10:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.101 18:10:53 -- nvmf/common.sh@469 -- # nvmfpid=75726 00:15:56.101 18:10:53 -- nvmf/common.sh@470 -- # waitforlisten 75726 00:15:56.101 18:10:53 -- common/autotest_common.sh@819 -- # '[' -z 75726 ']' 00:15:56.101 18:10:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:56.101 18:10:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.101 18:10:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.101 18:10:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.101 18:10:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.101 18:10:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.101 [2024-04-25 18:10:53.896968] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:56.101 [2024-04-25 18:10:53.897060] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:56.360 [2024-04-25 18:10:54.045658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.360 [2024-04-25 18:10:54.152231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.360 [2024-04-25 18:10:54.152637] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.360 [2024-04-25 18:10:54.152702] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.360 [2024-04-25 18:10:54.152825] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.360 [2024-04-25 18:10:54.153021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:56.360 [2024-04-25 18:10:54.153194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:56.360 [2024-04-25 18:10:54.153420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:56.360 [2024-04-25 18:10:54.153424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.973 18:10:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:56.973 18:10:54 -- common/autotest_common.sh@852 -- # return 0 00:15:56.973 18:10:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:56.973 18:10:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:56.973 18:10:54 -- common/autotest_common.sh@10 -- # set +x 00:15:56.973 18:10:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.973 18:10:54 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.973 18:10:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.973 18:10:54 -- common/autotest_common.sh@10 -- # set +x 00:15:57.231 [2024-04-25 18:10:54.909634] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.232 18:10:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.232 18:10:54 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:57.232 18:10:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.232 18:10:54 -- common/autotest_common.sh@10 -- # set +x 00:15:57.232 Malloc0 00:15:57.232 18:10:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.232 18:10:54 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:57.232 18:10:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.232 18:10:54 -- common/autotest_common.sh@10 -- # set +x 00:15:57.232 18:10:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.232 18:10:54 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.232 18:10:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.232 18:10:54 -- common/autotest_common.sh@10 -- # set +x 00:15:57.232 18:10:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.232 18:10:54 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.232 18:10:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.232 18:10:54 -- common/autotest_common.sh@10 -- # set +x 00:15:57.232 
[2024-04-25 18:10:54.947529] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.232 18:10:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.232 18:10:54 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:57.232 18:10:54 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:57.232 18:10:54 -- nvmf/common.sh@520 -- # config=() 00:15:57.232 18:10:54 -- nvmf/common.sh@520 -- # local subsystem config 00:15:57.232 18:10:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:57.232 18:10:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:57.232 { 00:15:57.232 "params": { 00:15:57.232 "name": "Nvme$subsystem", 00:15:57.232 "trtype": "$TEST_TRANSPORT", 00:15:57.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:57.232 "adrfam": "ipv4", 00:15:57.232 "trsvcid": "$NVMF_PORT", 00:15:57.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:57.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:57.232 "hdgst": ${hdgst:-false}, 00:15:57.232 "ddgst": ${ddgst:-false} 00:15:57.232 }, 00:15:57.232 "method": "bdev_nvme_attach_controller" 00:15:57.232 } 00:15:57.232 EOF 00:15:57.232 )") 00:15:57.232 18:10:54 -- nvmf/common.sh@542 -- # cat 00:15:57.232 18:10:54 -- nvmf/common.sh@544 -- # jq . 00:15:57.232 18:10:54 -- nvmf/common.sh@545 -- # IFS=, 00:15:57.232 18:10:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:57.232 "params": { 00:15:57.232 "name": "Nvme1", 00:15:57.232 "trtype": "tcp", 00:15:57.232 "traddr": "10.0.0.2", 00:15:57.232 "adrfam": "ipv4", 00:15:57.232 "trsvcid": "4420", 00:15:57.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:57.232 "hdgst": false, 00:15:57.232 "ddgst": false 00:15:57.232 }, 00:15:57.232 "method": "bdev_nvme_attach_controller" 00:15:57.232 }' 00:15:57.232 [2024-04-25 18:10:55.008118] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:57.232 [2024-04-25 18:10:55.008213] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75780 ] 00:15:57.232 [2024-04-25 18:10:55.152512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.490 [2024-04-25 18:10:55.304172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.490 [2024-04-25 18:10:55.304310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.490 [2024-04-25 18:10:55.304315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.747 [2024-04-25 18:10:55.501335] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:57.747 [2024-04-25 18:10:55.501379] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:57.747 I/O targets: 00:15:57.747 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:57.747 00:15:57.747 00:15:57.747 CUnit - A unit testing framework for C - Version 2.1-3 00:15:57.747 http://cunit.sourceforge.net/ 00:15:57.747 00:15:57.747 00:15:57.747 Suite: bdevio tests on: Nvme1n1 00:15:57.747 Test: blockdev write read block ...passed 00:15:57.747 Test: blockdev write zeroes read block ...passed 00:15:57.747 Test: blockdev write zeroes read no split ...passed 00:15:57.747 Test: blockdev write zeroes read split ...passed 00:15:57.747 Test: blockdev write zeroes read split partial ...passed 00:15:57.747 Test: blockdev reset ...[2024-04-25 18:10:55.628128] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:57.747 [2024-04-25 18:10:55.628228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0eba0 (9): Bad file descriptor 00:15:57.747 [2024-04-25 18:10:55.640446] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:57.747 passed 00:15:57.747 Test: blockdev write read 8 blocks ...passed 00:15:57.747 Test: blockdev write read size > 128k ...passed 00:15:57.747 Test: blockdev write read invalid size ...passed 00:15:58.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:58.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:58.005 Test: blockdev write read max offset ...passed 00:15:58.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:58.005 Test: blockdev writev readv 8 blocks ...passed 00:15:58.005 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.005 Test: blockdev writev readv block ...passed 00:15:58.005 Test: blockdev writev readv size > 128k ...passed 00:15:58.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.005 Test: blockdev comparev and writev ...[2024-04-25 18:10:55.814030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.005 [2024-04-25 18:10:55.814102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:58.005 [2024-04-25 18:10:55.814139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.005 [2024-04-25 18:10:55.814150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:58.005 [2024-04-25 18:10:55.814516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.005 [2024-04-25 18:10:55.814544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:58.005 [2024-04-25 18:10:55.814564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.005 [2024-04-25 18:10:55.814574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:58.005 [2024-04-25 18:10:55.814948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.005 [2024-04-25 18:10:55.814981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:58.005 [2024-04-25 18:10:55.815000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.005 [2024-04-25 18:10:55.815010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:58.006 [2024-04-25 18:10:55.815378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.006 [2024-04-25 18:10:55.815410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:58.006 [2024-04-25 18:10:55.815429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.006 [2024-04-25 18:10:55.815440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:58.006 passed 00:15:58.006 Test: blockdev nvme passthru rw ...passed 00:15:58.006 Test: blockdev nvme passthru vendor specific ...[2024-04-25 18:10:55.899551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.006 [2024-04-25 18:10:55.899579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:58.006 [2024-04-25 18:10:55.899709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.006 [2024-04-25 18:10:55.899726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:58.006 [2024-04-25 18:10:55.899838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.006 [2024-04-25 18:10:55.899853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:58.006 [2024-04-25 18:10:55.899965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.006 [2024-04-25 18:10:55.899980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:58.006 passed 00:15:58.006 Test: blockdev nvme admin passthru ...passed 00:15:58.264 Test: blockdev copy ...passed 00:15:58.264 00:15:58.264 Run Summary: Type Total Ran Passed Failed Inactive 00:15:58.264 suites 1 1 n/a 0 0 00:15:58.265 tests 23 23 23 0 0 00:15:58.265 asserts 152 152 152 0 n/a 00:15:58.265 00:15:58.265 Elapsed time = 0.914 seconds 00:15:58.523 18:10:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.523 18:10:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.523 18:10:56 -- common/autotest_common.sh@10 -- # set +x 00:15:58.523 18:10:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.523 18:10:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:58.523 18:10:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:15:58.523 18:10:56 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:15:58.523 18:10:56 -- nvmf/common.sh@116 -- # sync 00:15:58.523 18:10:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:58.523 18:10:56 -- nvmf/common.sh@119 -- # set +e 00:15:58.523 18:10:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:58.523 18:10:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:58.523 rmmod nvme_tcp 00:15:58.523 rmmod nvme_fabrics 00:15:58.782 rmmod nvme_keyring 00:15:58.782 18:10:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:58.782 18:10:56 -- nvmf/common.sh@123 -- # set -e 00:15:58.782 18:10:56 -- nvmf/common.sh@124 -- # return 0 00:15:58.782 18:10:56 -- nvmf/common.sh@477 -- # '[' -n 75726 ']' 00:15:58.782 18:10:56 -- nvmf/common.sh@478 -- # killprocess 75726 00:15:58.782 18:10:56 -- common/autotest_common.sh@926 -- # '[' -z 75726 ']' 00:15:58.782 18:10:56 -- common/autotest_common.sh@930 -- # kill -0 75726 00:15:58.782 18:10:56 -- common/autotest_common.sh@931 -- # uname 00:15:58.782 18:10:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:58.782 18:10:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75726 00:15:58.782 killing process with pid 75726 00:15:58.782 18:10:56 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:15:58.782 18:10:56 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:15:58.782 18:10:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75726' 00:15:58.782 18:10:56 -- common/autotest_common.sh@945 -- # kill 75726 00:15:58.782 18:10:56 -- common/autotest_common.sh@950 -- # wait 75726 00:15:59.040 18:10:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:59.040 18:10:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:59.040 18:10:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:59.040 18:10:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.040 18:10:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:59.040 18:10:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.040 18:10:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.040 18:10:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.040 18:10:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:59.040 00:15:59.040 real 0m3.538s 00:15:59.040 user 0m13.090s 00:15:59.040 sys 0m1.274s 00:15:59.040 18:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.040 ************************************ 00:15:59.040 END TEST nvmf_bdevio_no_huge 00:15:59.040 18:10:56 -- common/autotest_common.sh@10 -- # set +x 00:15:59.040 ************************************ 00:15:59.299 18:10:56 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:59.299 18:10:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:59.299 18:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:59.299 18:10:56 -- common/autotest_common.sh@10 -- # set +x 00:15:59.299 ************************************ 00:15:59.299 START TEST nvmf_tls 00:15:59.299 ************************************ 00:15:59.299 18:10:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:59.299 * Looking for test storage... 
00:15:59.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:59.299 18:10:57 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.299 18:10:57 -- nvmf/common.sh@7 -- # uname -s 00:15:59.299 18:10:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.299 18:10:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.299 18:10:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.299 18:10:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.299 18:10:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.299 18:10:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.299 18:10:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.299 18:10:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.299 18:10:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.299 18:10:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.299 18:10:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:59.299 18:10:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:15:59.299 18:10:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.299 18:10:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.299 18:10:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.299 18:10:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.299 18:10:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.299 18:10:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.299 18:10:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.299 18:10:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.299 18:10:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.299 18:10:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.299 18:10:57 -- paths/export.sh@5 
-- # export PATH 00:15:59.299 18:10:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.299 18:10:57 -- nvmf/common.sh@46 -- # : 0 00:15:59.299 18:10:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:59.299 18:10:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:59.299 18:10:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:59.299 18:10:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.299 18:10:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.299 18:10:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:59.299 18:10:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:59.299 18:10:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:59.299 18:10:57 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.299 18:10:57 -- target/tls.sh@71 -- # nvmftestinit 00:15:59.299 18:10:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:59.299 18:10:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.299 18:10:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:59.299 18:10:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:59.299 18:10:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:59.299 18:10:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.299 18:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.299 18:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.299 18:10:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:59.299 18:10:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:59.299 18:10:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:59.299 18:10:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:59.299 18:10:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:59.299 18:10:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:59.299 18:10:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.299 18:10:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.299 18:10:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:59.299 18:10:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:59.299 18:10:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.299 18:10:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.299 18:10:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.299 18:10:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.299 18:10:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.299 18:10:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.299 18:10:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.299 18:10:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.299 18:10:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:59.299 18:10:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:15:59.299 Cannot find device "nvmf_tgt_br" 00:15:59.299 18:10:57 -- nvmf/common.sh@154 -- # true 00:15:59.299 18:10:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.299 Cannot find device "nvmf_tgt_br2" 00:15:59.299 18:10:57 -- nvmf/common.sh@155 -- # true 00:15:59.299 18:10:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:59.299 18:10:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:59.299 Cannot find device "nvmf_tgt_br" 00:15:59.299 18:10:57 -- nvmf/common.sh@157 -- # true 00:15:59.300 18:10:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:59.300 Cannot find device "nvmf_tgt_br2" 00:15:59.300 18:10:57 -- nvmf/common.sh@158 -- # true 00:15:59.300 18:10:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:59.300 18:10:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:59.300 18:10:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.300 18:10:57 -- nvmf/common.sh@161 -- # true 00:15:59.300 18:10:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.300 18:10:57 -- nvmf/common.sh@162 -- # true 00:15:59.300 18:10:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:59.300 18:10:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:59.300 18:10:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:59.558 18:10:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:59.559 18:10:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:59.559 18:10:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:59.559 18:10:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:59.559 18:10:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:59.559 18:10:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:59.559 18:10:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:59.559 18:10:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:59.559 18:10:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:59.559 18:10:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:59.559 18:10:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:59.559 18:10:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:59.559 18:10:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:59.559 18:10:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:59.559 18:10:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:59.559 18:10:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:59.559 18:10:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:59.559 18:10:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:59.559 18:10:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:59.559 18:10:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:15:59.559 18:10:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:59.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:15:59.559 00:15:59.559 --- 10.0.0.2 ping statistics --- 00:15:59.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.559 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:59.559 18:10:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:59.559 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:59.559 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:15:59.559 00:15:59.559 --- 10.0.0.3 ping statistics --- 00:15:59.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.559 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:59.559 18:10:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:59.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:15:59.559 00:15:59.559 --- 10.0.0.1 ping statistics --- 00:15:59.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.559 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:15:59.559 18:10:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.559 18:10:57 -- nvmf/common.sh@421 -- # return 0 00:15:59.559 18:10:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:59.559 18:10:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.559 18:10:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:59.559 18:10:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:59.559 18:10:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.559 18:10:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:59.559 18:10:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:59.559 18:10:57 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:59.559 18:10:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:59.559 18:10:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:59.559 18:10:57 -- common/autotest_common.sh@10 -- # set +x 00:15:59.559 18:10:57 -- nvmf/common.sh@469 -- # nvmfpid=75965 00:15:59.559 18:10:57 -- nvmf/common.sh@470 -- # waitforlisten 75965 00:15:59.559 18:10:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:59.559 18:10:57 -- common/autotest_common.sh@819 -- # '[' -z 75965 ']' 00:15:59.559 18:10:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.559 18:10:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:59.559 18:10:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.559 18:10:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:59.559 18:10:57 -- common/autotest_common.sh@10 -- # set +x 00:15:59.559 [2024-04-25 18:10:57.484129] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:59.559 [2024-04-25 18:10:57.484228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.818 [2024-04-25 18:10:57.624131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.818 [2024-04-25 18:10:57.713258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:59.818 [2024-04-25 18:10:57.713453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.818 [2024-04-25 18:10:57.713471] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.818 [2024-04-25 18:10:57.713483] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.818 [2024-04-25 18:10:57.713520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.753 18:10:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:00.753 18:10:58 -- common/autotest_common.sh@852 -- # return 0 00:16:00.753 18:10:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:00.753 18:10:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:00.753 18:10:58 -- common/autotest_common.sh@10 -- # set +x 00:16:00.753 18:10:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.753 18:10:58 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:00.753 18:10:58 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:00.753 true 00:16:00.753 18:10:58 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.753 18:10:58 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:01.011 18:10:58 -- target/tls.sh@82 -- # version=0 00:16:01.011 18:10:58 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:01.011 18:10:58 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:01.269 18:10:59 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:01.269 18:10:59 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:01.527 18:10:59 -- target/tls.sh@90 -- # version=13 00:16:01.527 18:10:59 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:01.527 18:10:59 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:01.786 18:10:59 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:01.786 18:10:59 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:02.045 18:10:59 -- target/tls.sh@98 -- # version=7 00:16:02.045 18:10:59 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:02.045 18:10:59 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:02.045 18:10:59 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:02.303 18:11:00 -- target/tls.sh@105 -- # ktls=false 00:16:02.303 18:11:00 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:02.304 18:11:00 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:02.562 18:11:00 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:02.562 18:11:00 -- target/tls.sh@113 -- # jq -r 
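
Editor's note: tls.sh@72-99 then exercises the ssl socket implementation over JSON-RPC: pick ssl as the default implementation, write a TLS version, and read it back through jq to confirm the option round-trips. A condensed sketch of that exchange, assuming the rpc.py path and the running nvmf_tgt from the trace (the set_and_check_tls_version wrapper is illustrative, not a tls.sh function):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Make ssl the default socket implementation (tls.sh@79).
$RPC sock_set_default_impl -i ssl

# Set a TLS version, read it back, and compare, as tls.sh@89-99 does.
set_and_check_tls_version() {
    local want=$1 got
    $RPC sock_impl_set_options -i ssl --tls-version "$want"
    got=$($RPC sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ $got == "$want" ]] || echo "tls_version mismatch: got $got, want $want" >&2
}

set_and_check_tls_version 13
set_and_check_tls_version 7

The ktls toggle traced immediately after follows the same pattern with --enable-ktls / --disable-ktls, read back via jq -r .enable_ktls.
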
.enable_ktls 00:16:02.821 18:11:00 -- target/tls.sh@113 -- # ktls=true 00:16:02.821 18:11:00 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:02.821 18:11:00 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:03.087 18:11:00 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:03.087 18:11:00 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:03.352 18:11:01 -- target/tls.sh@121 -- # ktls=false 00:16:03.352 18:11:01 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:03.352 18:11:01 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:03.352 18:11:01 -- target/tls.sh@49 -- # local key hash crc 00:16:03.352 18:11:01 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:03.352 18:11:01 -- target/tls.sh@51 -- # hash=01 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # gzip -1 -c 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # tail -c8 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # head -c 4 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # crc='p$H�' 00:16:03.352 18:11:01 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:03.352 18:11:01 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:03.352 18:11:01 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:03.352 18:11:01 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:03.352 18:11:01 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:03.352 18:11:01 -- target/tls.sh@49 -- # local key hash crc 00:16:03.352 18:11:01 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:03.352 18:11:01 -- target/tls.sh@51 -- # hash=01 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # gzip -1 -c 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # tail -c8 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # head -c 4 00:16:03.352 18:11:01 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:03.352 18:11:01 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:03.352 18:11:01 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:03.352 18:11:01 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:03.352 18:11:01 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:03.352 18:11:01 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:03.352 18:11:01 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:03.352 18:11:01 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:03.352 18:11:01 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:03.352 18:11:01 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:03.352 18:11:01 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:03.352 18:11:01 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:03.352 18:11:01 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:03.920 18:11:01 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:03.920 18:11:01 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:03.920 18:11:01 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:03.920 [2024-04-25 18:11:01.835620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.259 18:11:01 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:04.259 18:11:02 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:04.519 [2024-04-25 18:11:02.235749] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:04.519 [2024-04-25 18:11:02.235963] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.519 18:11:02 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:04.519 malloc0 00:16:04.778 18:11:02 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:04.778 18:11:02 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:05.037 18:11:02 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:17.241 Initializing NVMe Controllers 00:16:17.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:17.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:17.241 Initialization complete. Launching workers. 
00:16:17.241 ======================================================== 00:16:17.241 Latency(us) 00:16:17.241 Device Information : IOPS MiB/s Average min max 00:16:17.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11641.17 45.47 5498.76 827.19 7160.39 00:16:17.241 ======================================================== 00:16:17.241 Total : 11641.17 45.47 5498.76 827.19 7160.39 00:16:17.241 00:16:17.241 18:11:13 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:17.241 18:11:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:17.241 18:11:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:17.241 18:11:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:17.241 18:11:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:17.241 18:11:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:17.241 18:11:13 -- target/tls.sh@28 -- # bdevperf_pid=76322 00:16:17.241 18:11:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.241 18:11:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:17.241 18:11:13 -- target/tls.sh@31 -- # waitforlisten 76322 /var/tmp/bdevperf.sock 00:16:17.241 18:11:13 -- common/autotest_common.sh@819 -- # '[' -z 76322 ']' 00:16:17.241 18:11:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.241 18:11:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.241 18:11:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.241 18:11:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.241 18:11:13 -- common/autotest_common.sh@10 -- # set +x 00:16:17.241 [2024-04-25 18:11:13.083410] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:17.241 [2024-04-25 18:11:13.083521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76322 ] 00:16:17.241 [2024-04-25 18:11:13.222953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.241 [2024-04-25 18:11:13.335508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.241 18:11:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.241 18:11:13 -- common/autotest_common.sh@852 -- # return 0 00:16:17.241 18:11:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:17.241 [2024-04-25 18:11:14.130799] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:17.241 TLSTESTn1 00:16:17.242 18:11:14 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:17.242 Running I/O for 10 seconds... 
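
Editor's note: setup_nvmf_tgt and run_bdevperf, whose traces bracket the perf results above, boil down to a short RPC sequence: the target exports a malloc namespace behind a TLS-enabled TCP listener tied to one host NQN and PSK, and the host side attaches through bdevperf with the matching key file. A condensed sketch using only RPCs that appear in the trace (addresses, NQNs, and paths are the ones from the log; startup waits and teardown are omitted):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

# --- target side (rpc.py talks to the nvmf_tgt inside nvmf_tgt_ns_spdk) ---
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-enabled (hence the "TLS support is considered
# experimental" notices in the trace).
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# --- host side: bdevperf on its own RPC socket, then attach with the PSK ---
SOCK=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
# (the real run_bdevperf waits on $SOCK via waitforlisten before continuing)

$RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests

Every subsequent run in this log is a variation on the attach line: a different PSK file, host NQN, subsystem NQN, or no --psk at all, each expected to fail.
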
00:16:27.214 00:16:27.214 Latency(us) 00:16:27.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.214 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:27.214 Verification LBA range: start 0x0 length 0x2000 00:16:27.214 TLSTESTn1 : 10.01 6498.66 25.39 0.00 0.00 19665.53 3678.95 21567.30 00:16:27.214 =================================================================================================================== 00:16:27.214 Total : 6498.66 25.39 0.00 0.00 19665.53 3678.95 21567.30 00:16:27.214 0 00:16:27.214 18:11:24 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.214 18:11:24 -- target/tls.sh@45 -- # killprocess 76322 00:16:27.214 18:11:24 -- common/autotest_common.sh@926 -- # '[' -z 76322 ']' 00:16:27.214 18:11:24 -- common/autotest_common.sh@930 -- # kill -0 76322 00:16:27.214 18:11:24 -- common/autotest_common.sh@931 -- # uname 00:16:27.214 18:11:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:27.214 18:11:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76322 00:16:27.214 killing process with pid 76322 00:16:27.214 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.214 00:16:27.214 Latency(us) 00:16:27.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.214 =================================================================================================================== 00:16:27.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.214 18:11:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:27.214 18:11:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:27.215 18:11:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76322' 00:16:27.215 18:11:24 -- common/autotest_common.sh@945 -- # kill 76322 00:16:27.215 18:11:24 -- common/autotest_common.sh@950 -- # wait 76322 00:16:27.215 18:11:24 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:27.215 18:11:24 -- common/autotest_common.sh@640 -- # local es=0 00:16:27.215 18:11:24 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:27.215 18:11:24 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:27.215 18:11:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.215 18:11:24 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:27.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
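
Editor's note: all of the key files used here are generated by format_interchange_psk (tls.sh@49-54). As traced, it appends four bytes taken from the tail of gzip -1 output to the raw key and base64-encodes the pair into the NVMe TLS interchange form; those four bytes are the CRC-32 of the input, since the gzip trailer (RFC 1952) is CRC-32 followed by the input length, both little-endian. A stand-alone re-creation of that derivation, assuming only what the trace shows (the shell capture of $crc works here because these keys produce no NUL or trailing-newline CRC bytes; the trace feeds base64 through a process substitution, which a plain pipe reproduces):

format_interchange_psk() {
    local key=$1 hash=$2 crc
    # gzip -1's 8-byte trailer = CRC-32 (little-endian) + input length; keep the CRC-32.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
}

# hash 01 with the 32-hex-digit key from tls.sh@127:
format_interchange_psk 00112233445566778899aabbccddeeff 01
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The key_long test further down reuses the helper with hash 02 and a 48-hex-digit key, producing the NVMeTLSkey-1:02:... string written to key_long.txt; both key files are chmod 0600 before being handed to --psk.
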
00:16:27.215 18:11:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.215 18:11:24 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:27.215 18:11:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:27.215 18:11:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:27.215 18:11:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:27.215 18:11:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:27.215 18:11:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.215 18:11:24 -- target/tls.sh@28 -- # bdevperf_pid=76475 00:16:27.215 18:11:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.215 18:11:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.215 18:11:24 -- target/tls.sh@31 -- # waitforlisten 76475 /var/tmp/bdevperf.sock 00:16:27.215 18:11:24 -- common/autotest_common.sh@819 -- # '[' -z 76475 ']' 00:16:27.215 18:11:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.215 18:11:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.215 18:11:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.215 18:11:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.215 18:11:24 -- common/autotest_common.sh@10 -- # set +x 00:16:27.215 [2024-04-25 18:11:24.647589] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:27.215 [2024-04-25 18:11:24.647686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76475 ] 00:16:27.215 [2024-04-25 18:11:24.785194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.215 [2024-04-25 18:11:24.879664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.782 18:11:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.782 18:11:25 -- common/autotest_common.sh@852 -- # return 0 00:16:27.782 18:11:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:28.041 [2024-04-25 18:11:25.777856] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:28.041 [2024-04-25 18:11:25.786879] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:28.041 [2024-04-25 18:11:25.787798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecb570 (107): Transport endpoint is not connected 00:16:28.041 [2024-04-25 18:11:25.788785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecb570 (9): Bad file descriptor 00:16:28.041 [2024-04-25 18:11:25.789782] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:28.041 [2024-04-25 18:11:25.789823] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:28.041 [2024-04-25 18:11:25.789854] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:28.041 2024/04/25 18:11:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:28.041 request: 00:16:28.041 { 00:16:28.041 "method": "bdev_nvme_attach_controller", 00:16:28.041 "params": { 00:16:28.041 "name": "TLSTEST", 00:16:28.041 "trtype": "tcp", 00:16:28.041 "traddr": "10.0.0.2", 00:16:28.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.041 "adrfam": "ipv4", 00:16:28.041 "trsvcid": "4420", 00:16:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.041 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:28.041 } 00:16:28.041 } 00:16:28.041 Got JSON-RPC error response 00:16:28.041 GoRPCClient: error on JSON-RPC call 00:16:28.041 18:11:25 -- target/tls.sh@36 -- # killprocess 76475 00:16:28.041 18:11:25 -- common/autotest_common.sh@926 -- # '[' -z 76475 ']' 00:16:28.041 18:11:25 -- common/autotest_common.sh@930 -- # kill -0 76475 00:16:28.041 18:11:25 -- common/autotest_common.sh@931 -- # uname 00:16:28.041 18:11:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.041 18:11:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76475 00:16:28.041 killing process with pid 76475 00:16:28.041 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.041 00:16:28.041 Latency(us) 00:16:28.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.041 =================================================================================================================== 00:16:28.041 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.041 18:11:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:28.041 18:11:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:28.041 18:11:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76475' 00:16:28.041 18:11:25 -- common/autotest_common.sh@945 -- # kill 76475 00:16:28.041 18:11:25 -- common/autotest_common.sh@950 -- # wait 76475 00:16:28.300 18:11:26 -- target/tls.sh@37 -- # return 1 00:16:28.300 18:11:26 -- common/autotest_common.sh@643 -- # es=1 00:16:28.300 18:11:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:28.300 18:11:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:28.300 18:11:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:28.300 18:11:26 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:28.300 18:11:26 -- common/autotest_common.sh@640 -- # local es=0 00:16:28.300 18:11:26 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:28.300 18:11:26 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:28.300 18:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:28.300 18:11:26 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:28.300 18:11:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:28.300 18:11:26 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:28.300 18:11:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:28.300 18:11:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:28.300 18:11:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:28.300 18:11:26 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:28.300 18:11:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.300 18:11:26 -- target/tls.sh@28 -- # bdevperf_pid=76519 00:16:28.300 18:11:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:28.300 18:11:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.300 18:11:26 -- target/tls.sh@31 -- # waitforlisten 76519 /var/tmp/bdevperf.sock 00:16:28.300 18:11:26 -- common/autotest_common.sh@819 -- # '[' -z 76519 ']' 00:16:28.300 18:11:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.300 18:11:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:28.300 18:11:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.300 18:11:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:28.300 18:11:26 -- common/autotest_common.sh@10 -- # set +x 00:16:28.300 [2024-04-25 18:11:26.123646] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:28.300 [2024-04-25 18:11:26.123762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76519 ] 00:16:28.558 [2024-04-25 18:11:26.259089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.558 [2024-04-25 18:11:26.339796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.125 18:11:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:29.125 18:11:27 -- common/autotest_common.sh@852 -- # return 0 00:16:29.125 18:11:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.384 [2024-04-25 18:11:27.209456] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.384 [2024-04-25 18:11:27.216951] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:29.384 [2024-04-25 18:11:27.217002] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:29.384 [2024-04-25 18:11:27.217066] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:29.384 [2024-04-25 18:11:27.217823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f570 (107): Transport endpoint is not connected 
00:16:29.385 [2024-04-25 18:11:27.218810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f570 (9): Bad file descriptor 00:16:29.385 [2024-04-25 18:11:27.219806] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:29.385 [2024-04-25 18:11:27.219847] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:29.385 [2024-04-25 18:11:27.219878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:29.385 2024/04/25 18:11:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:29.385 request: 00:16:29.385 { 00:16:29.385 "method": "bdev_nvme_attach_controller", 00:16:29.385 "params": { 00:16:29.385 "name": "TLSTEST", 00:16:29.385 "trtype": "tcp", 00:16:29.385 "traddr": "10.0.0.2", 00:16:29.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:29.385 "adrfam": "ipv4", 00:16:29.385 "trsvcid": "4420", 00:16:29.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.385 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:29.385 } 00:16:29.385 } 00:16:29.385 Got JSON-RPC error response 00:16:29.385 GoRPCClient: error on JSON-RPC call 00:16:29.385 18:11:27 -- target/tls.sh@36 -- # killprocess 76519 00:16:29.385 18:11:27 -- common/autotest_common.sh@926 -- # '[' -z 76519 ']' 00:16:29.385 18:11:27 -- common/autotest_common.sh@930 -- # kill -0 76519 00:16:29.385 18:11:27 -- common/autotest_common.sh@931 -- # uname 00:16:29.385 18:11:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:29.385 18:11:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76519 00:16:29.385 killing process with pid 76519 00:16:29.385 Received shutdown signal, test time was about 10.000000 seconds 00:16:29.385 00:16:29.385 Latency(us) 00:16:29.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.385 =================================================================================================================== 00:16:29.385 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:29.385 18:11:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:29.385 18:11:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:29.385 18:11:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76519' 00:16:29.385 18:11:27 -- common/autotest_common.sh@945 -- # kill 76519 00:16:29.385 18:11:27 -- common/autotest_common.sh@950 -- # wait 76519 00:16:29.643 18:11:27 -- target/tls.sh@37 -- # return 1 00:16:29.643 18:11:27 -- common/autotest_common.sh@643 -- # es=1 00:16:29.643 18:11:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:29.643 18:11:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:29.643 18:11:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:29.643 18:11:27 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.643 18:11:27 -- common/autotest_common.sh@640 -- # local es=0 00:16:29.643 18:11:27 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.643 18:11:27 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:29.643 18:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:29.643 18:11:27 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:29.643 18:11:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:29.643 18:11:27 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.643 18:11:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:29.643 18:11:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:29.643 18:11:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:29.643 18:11:27 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:29.643 18:11:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:29.643 18:11:27 -- target/tls.sh@28 -- # bdevperf_pid=76560 00:16:29.643 18:11:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.643 18:11:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:29.643 18:11:27 -- target/tls.sh@31 -- # waitforlisten 76560 /var/tmp/bdevperf.sock 00:16:29.643 18:11:27 -- common/autotest_common.sh@819 -- # '[' -z 76560 ']' 00:16:29.643 18:11:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.643 18:11:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:29.643 18:11:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.643 18:11:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:29.643 18:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:29.643 [2024-04-25 18:11:27.548731] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:29.643 [2024-04-25 18:11:27.549384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76560 ] 00:16:29.902 [2024-04-25 18:11:27.685388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.902 [2024-04-25 18:11:27.768154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.839 18:11:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:30.839 18:11:28 -- common/autotest_common.sh@852 -- # return 0 00:16:30.839 18:11:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:30.839 [2024-04-25 18:11:28.630811] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:30.839 [2024-04-25 18:11:28.637223] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:30.839 [2024-04-25 18:11:28.637284] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:30.839 [2024-04-25 18:11:28.637338] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:30.839 [2024-04-25 18:11:28.637813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88b570 (107): Transport endpoint is not connected 00:16:30.839 [2024-04-25 18:11:28.638796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88b570 (9): Bad file descriptor 00:16:30.839 [2024-04-25 18:11:28.639794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:30.839 [2024-04-25 18:11:28.639835] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:30.839 [2024-04-25 18:11:28.639866] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
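
Editor's note: the failure cases in this stretch (wrong PSK, unknown host NQN, unknown subsystem NQN, and later no PSK) are all driven through the NOT helper traced at common/autotest_common.sh@640-667: it runs the wrapped command and succeeds only if that command fails. A simplified sketch reconstructed from the traced lines; the real helper also validates that its argument is callable and special-cases exit codes above 128, both omitted here:

# Succeed iff the wrapped command fails (simplified from the autotest_common.sh trace).
NOT() {
    local es=0
    "$@" || es=$?
    # autotest_common.sh additionally masks signal exits (es > 128) and can
    # match an expected error string; neither is needed for these TLS cases.
    (( es != 0 ))
}

# Mirroring tls.sh@155: attaching with the wrong key must be rejected.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt
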
00:16:30.839 2024/04/25 18:11:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:30.839 request: 00:16:30.839 { 00:16:30.839 "method": "bdev_nvme_attach_controller", 00:16:30.839 "params": { 00:16:30.839 "name": "TLSTEST", 00:16:30.839 "trtype": "tcp", 00:16:30.839 "traddr": "10.0.0.2", 00:16:30.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.839 "adrfam": "ipv4", 00:16:30.839 "trsvcid": "4420", 00:16:30.839 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:30.839 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:30.839 } 00:16:30.839 } 00:16:30.839 Got JSON-RPC error response 00:16:30.839 GoRPCClient: error on JSON-RPC call 00:16:30.839 18:11:28 -- target/tls.sh@36 -- # killprocess 76560 00:16:30.839 18:11:28 -- common/autotest_common.sh@926 -- # '[' -z 76560 ']' 00:16:30.839 18:11:28 -- common/autotest_common.sh@930 -- # kill -0 76560 00:16:30.839 18:11:28 -- common/autotest_common.sh@931 -- # uname 00:16:30.839 18:11:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.839 18:11:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76560 00:16:30.839 killing process with pid 76560 00:16:30.839 Received shutdown signal, test time was about 10.000000 seconds 00:16:30.839 00:16:30.839 Latency(us) 00:16:30.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.839 =================================================================================================================== 00:16:30.839 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:30.839 18:11:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:30.839 18:11:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:30.839 18:11:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76560' 00:16:30.839 18:11:28 -- common/autotest_common.sh@945 -- # kill 76560 00:16:30.839 18:11:28 -- common/autotest_common.sh@950 -- # wait 76560 00:16:31.098 18:11:28 -- target/tls.sh@37 -- # return 1 00:16:31.098 18:11:28 -- common/autotest_common.sh@643 -- # es=1 00:16:31.098 18:11:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:31.098 18:11:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:31.098 18:11:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:31.098 18:11:28 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:31.098 18:11:28 -- common/autotest_common.sh@640 -- # local es=0 00:16:31.098 18:11:28 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:31.098 18:11:28 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:31.098 18:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.098 18:11:28 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:31.098 18:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.098 18:11:28 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:31.098 18:11:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:31.098 18:11:28 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:16:31.098 18:11:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:31.098 18:11:28 -- target/tls.sh@23 -- # psk= 00:16:31.098 18:11:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:31.098 18:11:28 -- target/tls.sh@28 -- # bdevperf_pid=76606 00:16:31.098 18:11:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.098 18:11:28 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:31.098 18:11:28 -- target/tls.sh@31 -- # waitforlisten 76606 /var/tmp/bdevperf.sock 00:16:31.098 18:11:28 -- common/autotest_common.sh@819 -- # '[' -z 76606 ']' 00:16:31.098 18:11:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.098 18:11:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.098 18:11:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.098 18:11:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.098 18:11:28 -- common/autotest_common.sh@10 -- # set +x 00:16:31.098 [2024-04-25 18:11:28.965890] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:31.098 [2024-04-25 18:11:28.966502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76606 ] 00:16:31.366 [2024-04-25 18:11:29.105277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.366 [2024-04-25 18:11:29.194895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.314 18:11:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.314 18:11:29 -- common/autotest_common.sh@852 -- # return 0 00:16:32.314 18:11:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:32.314 [2024-04-25 18:11:30.197408] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:32.314 [2024-04-25 18:11:30.199352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1045170 (9): Bad file descriptor 00:16:32.314 [2024-04-25 18:11:30.200364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:32.314 [2024-04-25 18:11:30.200442] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:32.314 [2024-04-25 18:11:30.200461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:32.314 2024/04/25 18:11:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:32.314 request: 00:16:32.314 { 00:16:32.314 "method": "bdev_nvme_attach_controller", 00:16:32.314 "params": { 00:16:32.314 "name": "TLSTEST", 00:16:32.314 "trtype": "tcp", 00:16:32.314 "traddr": "10.0.0.2", 00:16:32.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.314 "adrfam": "ipv4", 00:16:32.314 "trsvcid": "4420", 00:16:32.314 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:32.314 } 00:16:32.314 } 00:16:32.314 Got JSON-RPC error response 00:16:32.314 GoRPCClient: error on JSON-RPC call 00:16:32.314 18:11:30 -- target/tls.sh@36 -- # killprocess 76606 00:16:32.314 18:11:30 -- common/autotest_common.sh@926 -- # '[' -z 76606 ']' 00:16:32.314 18:11:30 -- common/autotest_common.sh@930 -- # kill -0 76606 00:16:32.314 18:11:30 -- common/autotest_common.sh@931 -- # uname 00:16:32.314 18:11:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:32.314 18:11:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76606 00:16:32.574 18:11:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:32.574 18:11:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:32.574 18:11:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76606' 00:16:32.574 killing process with pid 76606 00:16:32.574 18:11:30 -- common/autotest_common.sh@945 -- # kill 76606 00:16:32.574 18:11:30 -- common/autotest_common.sh@950 -- # wait 76606 00:16:32.574 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.574 00:16:32.574 Latency(us) 00:16:32.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.574 =================================================================================================================== 00:16:32.574 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.574 18:11:30 -- target/tls.sh@37 -- # return 1 00:16:32.574 18:11:30 -- common/autotest_common.sh@643 -- # es=1 00:16:32.574 18:11:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:32.574 18:11:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:32.574 18:11:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:32.574 18:11:30 -- target/tls.sh@167 -- # killprocess 75965 00:16:32.574 18:11:30 -- common/autotest_common.sh@926 -- # '[' -z 75965 ']' 00:16:32.574 18:11:30 -- common/autotest_common.sh@930 -- # kill -0 75965 00:16:32.574 18:11:30 -- common/autotest_common.sh@931 -- # uname 00:16:32.574 18:11:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:32.574 18:11:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75965 00:16:32.574 18:11:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:32.574 18:11:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:32.574 killing process with pid 75965 00:16:32.574 18:11:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75965' 00:16:32.574 18:11:30 -- common/autotest_common.sh@945 -- # kill 75965 00:16:32.574 18:11:30 -- common/autotest_common.sh@950 -- # wait 75965 00:16:32.833 18:11:30 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:16:32.833 18:11:30 -- 
target/tls.sh@49 -- # local key hash crc 00:16:32.833 18:11:30 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:32.833 18:11:30 -- target/tls.sh@51 -- # hash=02 00:16:32.833 18:11:30 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:32.833 18:11:30 -- target/tls.sh@52 -- # gzip -1 -c 00:16:32.833 18:11:30 -- target/tls.sh@52 -- # tail -c8 00:16:32.833 18:11:30 -- target/tls.sh@52 -- # head -c 4 00:16:32.833 18:11:30 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:32.833 18:11:30 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:32.833 18:11:30 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:32.833 18:11:30 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:32.833 18:11:30 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:32.833 18:11:30 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:32.833 18:11:30 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:32.833 18:11:30 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:32.833 18:11:30 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:32.833 18:11:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:32.833 18:11:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:32.833 18:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:32.833 18:11:30 -- nvmf/common.sh@469 -- # nvmfpid=76671 00:16:32.833 18:11:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:32.833 18:11:30 -- nvmf/common.sh@470 -- # waitforlisten 76671 00:16:32.833 18:11:30 -- common/autotest_common.sh@819 -- # '[' -z 76671 ']' 00:16:32.833 18:11:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.833 18:11:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.833 18:11:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.833 18:11:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.833 18:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:33.092 [2024-04-25 18:11:30.804772] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:33.092 [2024-04-25 18:11:30.804872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.092 [2024-04-25 18:11:30.937411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.092 [2024-04-25 18:11:31.020461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:33.092 [2024-04-25 18:11:31.020610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.092 [2024-04-25 18:11:31.020636] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:33.092 [2024-04-25 18:11:31.020659] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.092 [2024-04-25 18:11:31.020687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.027 18:11:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:34.027 18:11:31 -- common/autotest_common.sh@852 -- # return 0 00:16:34.027 18:11:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:34.027 18:11:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:34.027 18:11:31 -- common/autotest_common.sh@10 -- # set +x 00:16:34.027 18:11:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.027 18:11:31 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:34.027 18:11:31 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:34.027 18:11:31 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:34.287 [2024-04-25 18:11:31.985906] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.287 18:11:32 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:34.546 18:11:32 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:34.546 [2024-04-25 18:11:32.433964] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:34.546 [2024-04-25 18:11:32.434177] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.546 18:11:32 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:34.804 malloc0 00:16:34.804 18:11:32 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:35.063 18:11:32 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:35.322 18:11:33 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:35.322 18:11:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:35.322 18:11:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:35.322 18:11:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:35.322 18:11:33 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:35.322 18:11:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.322 18:11:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:35.322 18:11:33 -- target/tls.sh@28 -- # bdevperf_pid=76769 00:16:35.322 18:11:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.322 18:11:33 -- target/tls.sh@31 -- # waitforlisten 76769 /var/tmp/bdevperf.sock 00:16:35.322 18:11:33 -- common/autotest_common.sh@819 -- # '[' -z 76769 ']' 00:16:35.322 18:11:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.322 
18:11:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:35.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.322 18:11:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.322 18:11:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:35.322 18:11:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.581 [2024-04-25 18:11:33.263807] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:35.581 [2024-04-25 18:11:33.263903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76769 ] 00:16:35.581 [2024-04-25 18:11:33.400140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.581 [2024-04-25 18:11:33.494379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.516 18:11:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:36.516 18:11:34 -- common/autotest_common.sh@852 -- # return 0 00:16:36.516 18:11:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:36.516 [2024-04-25 18:11:34.360746] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:36.516 TLSTESTn1 00:16:36.516 18:11:34 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:36.775 Running I/O for 10 seconds... 
00:16:46.773 00:16:46.773 Latency(us) 00:16:46.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.773 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:46.773 Verification LBA range: start 0x0 length 0x2000 00:16:46.773 TLSTESTn1 : 10.01 6564.37 25.64 0.00 0.00 19469.25 4736.47 24069.59 00:16:46.773 =================================================================================================================== 00:16:46.773 Total : 6564.37 25.64 0.00 0.00 19469.25 4736.47 24069.59 00:16:46.773 0 00:16:46.773 18:11:44 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:46.773 18:11:44 -- target/tls.sh@45 -- # killprocess 76769 00:16:46.773 18:11:44 -- common/autotest_common.sh@926 -- # '[' -z 76769 ']' 00:16:46.773 18:11:44 -- common/autotest_common.sh@930 -- # kill -0 76769 00:16:46.773 18:11:44 -- common/autotest_common.sh@931 -- # uname 00:16:46.773 18:11:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:46.773 18:11:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76769 00:16:46.773 18:11:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:46.773 18:11:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:46.773 killing process with pid 76769 00:16:46.773 18:11:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76769' 00:16:46.773 18:11:44 -- common/autotest_common.sh@945 -- # kill 76769 00:16:46.773 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.773 00:16:46.773 Latency(us) 00:16:46.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.773 =================================================================================================================== 00:16:46.773 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.773 18:11:44 -- common/autotest_common.sh@950 -- # wait 76769 00:16:47.031 18:11:44 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:47.031 18:11:44 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:47.032 18:11:44 -- common/autotest_common.sh@640 -- # local es=0 00:16:47.032 18:11:44 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:47.032 18:11:44 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:47.032 18:11:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:47.032 18:11:44 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:47.032 18:11:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:47.032 18:11:44 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:47.032 18:11:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:47.032 18:11:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:47.032 18:11:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:47.032 18:11:44 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:47.032 18:11:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.032 18:11:44 -- target/tls.sh@28 -- # bdevperf_pid=76918 
00:16:47.032 18:11:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.032 18:11:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.032 18:11:44 -- target/tls.sh@31 -- # waitforlisten 76918 /var/tmp/bdevperf.sock 00:16:47.032 18:11:44 -- common/autotest_common.sh@819 -- # '[' -z 76918 ']' 00:16:47.032 18:11:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.032 18:11:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:47.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.032 18:11:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.032 18:11:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:47.032 18:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:47.032 [2024-04-25 18:11:44.917592] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:47.032 [2024-04-25 18:11:44.917685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76918 ] 00:16:47.290 [2024-04-25 18:11:45.054638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.290 [2024-04-25 18:11:45.141236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.225 18:11:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.225 18:11:45 -- common/autotest_common.sh@852 -- # return 0 00:16:48.225 18:11:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:48.225 [2024-04-25 18:11:46.086618] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.225 [2024-04-25 18:11:46.086723] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:48.225 2024/04/25 18:11:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:48.225 request: 00:16:48.225 { 00:16:48.225 "method": "bdev_nvme_attach_controller", 00:16:48.225 "params": { 00:16:48.225 "name": "TLSTEST", 00:16:48.225 "trtype": "tcp", 00:16:48.225 "traddr": "10.0.0.2", 00:16:48.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.225 "adrfam": "ipv4", 00:16:48.225 "trsvcid": "4420", 00:16:48.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.225 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:16:48.225 } 00:16:48.225 } 00:16:48.225 Got JSON-RPC error response 00:16:48.225 GoRPCClient: error on JSON-RPC call 00:16:48.225 18:11:46 -- target/tls.sh@36 -- # killprocess 76918 00:16:48.225 18:11:46 -- common/autotest_common.sh@926 -- # '[' -z 76918 ']' 
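The Code=-22 error above is the expected result of the negative case: tls.sh@179 relaxed the key file to 0666 before this attach, and tcp_load_psk refuses any PSK file that is readable by group or others, so the controller never comes up and the NOT-wrapped run_bdevperf returns non-zero as intended. A hedged sketch of the behaviour being exercised, using the same paths as in the log:

# A group/world-readable PSK is rejected by the initiator-side attach ...
chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
# -> JSON-RPC Code=-22: "Could not retrieve PSK from file"

# ... and owner-only permissions are what the passing runs rely on.
chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt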
00:16:48.225 18:11:46 -- common/autotest_common.sh@930 -- # kill -0 76918 00:16:48.225 18:11:46 -- common/autotest_common.sh@931 -- # uname 00:16:48.225 18:11:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:48.225 18:11:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76918 00:16:48.225 18:11:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:48.225 18:11:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:48.225 killing process with pid 76918 00:16:48.225 18:11:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76918' 00:16:48.225 18:11:46 -- common/autotest_common.sh@945 -- # kill 76918 00:16:48.225 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.225 00:16:48.225 Latency(us) 00:16:48.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.225 =================================================================================================================== 00:16:48.225 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.225 18:11:46 -- common/autotest_common.sh@950 -- # wait 76918 00:16:48.484 18:11:46 -- target/tls.sh@37 -- # return 1 00:16:48.484 18:11:46 -- common/autotest_common.sh@643 -- # es=1 00:16:48.484 18:11:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:48.484 18:11:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:48.484 18:11:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:48.484 18:11:46 -- target/tls.sh@183 -- # killprocess 76671 00:16:48.484 18:11:46 -- common/autotest_common.sh@926 -- # '[' -z 76671 ']' 00:16:48.484 18:11:46 -- common/autotest_common.sh@930 -- # kill -0 76671 00:16:48.484 18:11:46 -- common/autotest_common.sh@931 -- # uname 00:16:48.484 18:11:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:48.484 18:11:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76671 00:16:48.484 18:11:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:48.484 18:11:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:48.484 killing process with pid 76671 00:16:48.484 18:11:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76671' 00:16:48.484 18:11:46 -- common/autotest_common.sh@945 -- # kill 76671 00:16:48.484 18:11:46 -- common/autotest_common.sh@950 -- # wait 76671 00:16:48.742 18:11:46 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:48.742 18:11:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.743 18:11:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:48.743 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:16:48.743 18:11:46 -- nvmf/common.sh@469 -- # nvmfpid=76969 00:16:48.743 18:11:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:48.743 18:11:46 -- nvmf/common.sh@470 -- # waitforlisten 76969 00:16:48.743 18:11:46 -- common/autotest_common.sh@819 -- # '[' -z 76969 ']' 00:16:48.743 18:11:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.743 18:11:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.743 18:11:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
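With the failed initiator-side attempt cleaned up and the previous target (pid 76671) stopped, tls.sh@184 starts a fresh nvmf_tgt (pid 76969) so the same permission check can be repeated on the target side. nvmfappstart here amounts to launching the target inside the test's nvmf_tgt_ns_spdk network namespace and then waiting until it answers on its default RPC socket (the "Waiting for process..." message above, retried up to max_retries=100). A rough sketch, not the exact helper:

# Launch the target in the dedicated namespace with the flags shown in the log and remember its pid.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# waitforlisten then polls /var/tmp/spdk.sock until the application accepts RPC connections.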
00:16:48.743 18:11:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.743 18:11:46 -- common/autotest_common.sh@10 -- # set +x 00:16:48.743 [2024-04-25 18:11:46.657080] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:48.743 [2024-04-25 18:11:46.657198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.001 [2024-04-25 18:11:46.785782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.001 [2024-04-25 18:11:46.863968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.001 [2024-04-25 18:11:46.864118] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.001 [2024-04-25 18:11:46.864131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.001 [2024-04-25 18:11:46.864138] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.001 [2024-04-25 18:11:46.864166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.936 18:11:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.937 18:11:47 -- common/autotest_common.sh@852 -- # return 0 00:16:49.937 18:11:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.937 18:11:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:49.937 18:11:47 -- common/autotest_common.sh@10 -- # set +x 00:16:49.937 18:11:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.937 18:11:47 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:49.937 18:11:47 -- common/autotest_common.sh@640 -- # local es=0 00:16:49.937 18:11:47 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:49.937 18:11:47 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:16:49.937 18:11:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.937 18:11:47 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:16:49.937 18:11:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.937 18:11:47 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:49.937 18:11:47 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:49.937 18:11:47 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:49.937 [2024-04-25 18:11:47.865878] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.195 18:11:47 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:50.453 18:11:48 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:50.453 [2024-04-25 18:11:48.305925] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:50.453 [2024-04-25 18:11:48.306128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
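The TCP transport and the TLS-enabled listener (-k) for nqn.2016-06.io.spdk:cnode1 are now live on 10.0.0.2:4420. The remaining steps of setup_nvmf_tgt (malloc bdev, namespace, host registration with the PSK) follow next in the log, and this particular invocation (tls.sh@186, wrapped in NOT) is still expected to fail because the key file stays at 0666 until tls.sh@190 restores 0600. For reference, the helper boils down to roughly these RPC calls, reconstructed from the commands visible in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

$rpc nvmf_create_transport -t tcp -o          # TCP transport; '-o' matches the "c2h_success": false setting in the saved config further down
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener (experimental)
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"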
00:16:50.453 18:11:48 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:50.712 malloc0 00:16:50.712 18:11:48 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:50.971 18:11:48 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:51.230 [2024-04-25 18:11:48.952997] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:51.230 [2024-04-25 18:11:48.953047] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:51.230 [2024-04-25 18:11:48.953080] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:16:51.230 2024/04/25 18:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:51.230 request: 00:16:51.230 { 00:16:51.230 "method": "nvmf_subsystem_add_host", 00:16:51.230 "params": { 00:16:51.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.230 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.230 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:16:51.230 } 00:16:51.230 } 00:16:51.230 Got JSON-RPC error response 00:16:51.230 GoRPCClient: error on JSON-RPC call 00:16:51.230 18:11:48 -- common/autotest_common.sh@643 -- # es=1 00:16:51.230 18:11:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:51.230 18:11:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:51.230 18:11:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:51.230 18:11:48 -- target/tls.sh@189 -- # killprocess 76969 00:16:51.230 18:11:48 -- common/autotest_common.sh@926 -- # '[' -z 76969 ']' 00:16:51.230 18:11:48 -- common/autotest_common.sh@930 -- # kill -0 76969 00:16:51.230 18:11:48 -- common/autotest_common.sh@931 -- # uname 00:16:51.230 18:11:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.230 18:11:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76969 00:16:51.230 18:11:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:51.230 18:11:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:51.230 killing process with pid 76969 00:16:51.230 18:11:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76969' 00:16:51.230 18:11:49 -- common/autotest_common.sh@945 -- # kill 76969 00:16:51.230 18:11:49 -- common/autotest_common.sh@950 -- # wait 76969 00:16:51.488 18:11:49 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:51.488 18:11:49 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:16:51.488 18:11:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:51.488 18:11:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:51.488 18:11:49 -- common/autotest_common.sh@10 -- # set +x 00:16:51.488 18:11:49 -- nvmf/common.sh@469 -- # nvmfpid=77080 00:16:51.488 18:11:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:51.488 18:11:49 -- nvmf/common.sh@470 -- # waitforlisten 77080 00:16:51.488 18:11:49 -- 
common/autotest_common.sh@819 -- # '[' -z 77080 ']' 00:16:51.488 18:11:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.488 18:11:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.488 18:11:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.488 18:11:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.488 18:11:49 -- common/autotest_common.sh@10 -- # set +x 00:16:51.488 [2024-04-25 18:11:49.287618] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:51.488 [2024-04-25 18:11:49.287729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.747 [2024-04-25 18:11:49.422787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.747 [2024-04-25 18:11:49.496261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.747 [2024-04-25 18:11:49.496450] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.747 [2024-04-25 18:11:49.496463] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.747 [2024-04-25 18:11:49.496471] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.747 [2024-04-25 18:11:49.496512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.314 18:11:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.314 18:11:50 -- common/autotest_common.sh@852 -- # return 0 00:16:52.314 18:11:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:52.314 18:11:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:52.314 18:11:50 -- common/autotest_common.sh@10 -- # set +x 00:16:52.573 18:11:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.573 18:11:50 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:52.573 18:11:50 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:52.573 18:11:50 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:52.573 [2024-04-25 18:11:50.503801] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.832 18:11:50 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:53.091 18:11:50 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:53.091 [2024-04-25 18:11:50.955798] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.091 [2024-04-25 18:11:50.956008] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.091 18:11:50 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:53.349 malloc0 00:16:53.349 18:11:51 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:53.608 18:11:51 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:53.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.869 18:11:51 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.869 18:11:51 -- target/tls.sh@197 -- # bdevperf_pid=77181 00:16:53.869 18:11:51 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.869 18:11:51 -- target/tls.sh@200 -- # waitforlisten 77181 /var/tmp/bdevperf.sock 00:16:53.869 18:11:51 -- common/autotest_common.sh@819 -- # '[' -z 77181 ']' 00:16:53.869 18:11:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.869 18:11:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.869 18:11:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.869 18:11:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.869 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:16:53.869 [2024-04-25 18:11:51.657794] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:53.869 [2024-04-25 18:11:51.657887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77181 ] 00:16:53.869 [2024-04-25 18:11:51.793555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.129 [2024-04-25 18:11:51.892307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.697 18:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.697 18:11:52 -- common/autotest_common.sh@852 -- # return 0 00:16:54.697 18:11:52 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:54.956 [2024-04-25 18:11:52.693796] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.956 TLSTESTn1 00:16:54.956 18:11:52 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:55.214 18:11:53 -- target/tls.sh@205 -- # tgtconf='{ 00:16:55.214 "subsystems": [ 00:16:55.214 { 00:16:55.214 "subsystem": "iobuf", 00:16:55.214 "config": [ 00:16:55.214 { 00:16:55.214 "method": "iobuf_set_options", 00:16:55.214 "params": { 00:16:55.214 "large_bufsize": 135168, 00:16:55.214 "large_pool_count": 1024, 00:16:55.214 "small_bufsize": 8192, 00:16:55.214 "small_pool_count": 8192 00:16:55.214 } 00:16:55.214 } 00:16:55.214 ] 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "subsystem": "sock", 00:16:55.214 "config": [ 00:16:55.214 { 00:16:55.214 "method": "sock_impl_set_options", 00:16:55.214 "params": { 00:16:55.214 "enable_ktls": false, 00:16:55.214 "enable_placement_id": 0, 00:16:55.214 "enable_quickack": false, 00:16:55.214 "enable_recv_pipe": true, 00:16:55.214 
"enable_zerocopy_send_client": false, 00:16:55.214 "enable_zerocopy_send_server": true, 00:16:55.214 "impl_name": "posix", 00:16:55.214 "recv_buf_size": 2097152, 00:16:55.214 "send_buf_size": 2097152, 00:16:55.214 "tls_version": 0, 00:16:55.214 "zerocopy_threshold": 0 00:16:55.214 } 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "method": "sock_impl_set_options", 00:16:55.214 "params": { 00:16:55.214 "enable_ktls": false, 00:16:55.214 "enable_placement_id": 0, 00:16:55.214 "enable_quickack": false, 00:16:55.214 "enable_recv_pipe": true, 00:16:55.214 "enable_zerocopy_send_client": false, 00:16:55.214 "enable_zerocopy_send_server": true, 00:16:55.214 "impl_name": "ssl", 00:16:55.214 "recv_buf_size": 4096, 00:16:55.214 "send_buf_size": 4096, 00:16:55.214 "tls_version": 0, 00:16:55.214 "zerocopy_threshold": 0 00:16:55.214 } 00:16:55.214 } 00:16:55.214 ] 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "subsystem": "vmd", 00:16:55.214 "config": [] 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "subsystem": "accel", 00:16:55.214 "config": [ 00:16:55.214 { 00:16:55.214 "method": "accel_set_options", 00:16:55.214 "params": { 00:16:55.214 "buf_count": 2048, 00:16:55.214 "large_cache_size": 16, 00:16:55.214 "sequence_count": 2048, 00:16:55.214 "small_cache_size": 128, 00:16:55.214 "task_count": 2048 00:16:55.214 } 00:16:55.214 } 00:16:55.214 ] 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "subsystem": "bdev", 00:16:55.214 "config": [ 00:16:55.214 { 00:16:55.214 "method": "bdev_set_options", 00:16:55.214 "params": { 00:16:55.214 "bdev_auto_examine": true, 00:16:55.214 "bdev_io_cache_size": 256, 00:16:55.214 "bdev_io_pool_size": 65535, 00:16:55.214 "iobuf_large_cache_size": 16, 00:16:55.214 "iobuf_small_cache_size": 128 00:16:55.214 } 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "method": "bdev_raid_set_options", 00:16:55.214 "params": { 00:16:55.214 "process_window_size_kb": 1024 00:16:55.214 } 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "method": "bdev_iscsi_set_options", 00:16:55.214 "params": { 00:16:55.214 "timeout_sec": 30 00:16:55.214 } 00:16:55.214 }, 00:16:55.214 { 00:16:55.214 "method": "bdev_nvme_set_options", 00:16:55.214 "params": { 00:16:55.214 "action_on_timeout": "none", 00:16:55.214 "allow_accel_sequence": false, 00:16:55.214 "arbitration_burst": 0, 00:16:55.214 "bdev_retry_count": 3, 00:16:55.214 "ctrlr_loss_timeout_sec": 0, 00:16:55.214 "delay_cmd_submit": true, 00:16:55.214 "fast_io_fail_timeout_sec": 0, 00:16:55.214 "generate_uuids": false, 00:16:55.214 "high_priority_weight": 0, 00:16:55.215 "io_path_stat": false, 00:16:55.215 "io_queue_requests": 0, 00:16:55.215 "keep_alive_timeout_ms": 10000, 00:16:55.215 "low_priority_weight": 0, 00:16:55.215 "medium_priority_weight": 0, 00:16:55.215 "nvme_adminq_poll_period_us": 10000, 00:16:55.215 "nvme_ioq_poll_period_us": 0, 00:16:55.215 "reconnect_delay_sec": 0, 00:16:55.215 "timeout_admin_us": 0, 00:16:55.215 "timeout_us": 0, 00:16:55.215 "transport_ack_timeout": 0, 00:16:55.215 "transport_retry_count": 4, 00:16:55.215 "transport_tos": 0 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "bdev_nvme_set_hotplug", 00:16:55.215 "params": { 00:16:55.215 "enable": false, 00:16:55.215 "period_us": 100000 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "bdev_malloc_create", 00:16:55.215 "params": { 00:16:55.215 "block_size": 4096, 00:16:55.215 "name": "malloc0", 00:16:55.215 "num_blocks": 8192, 00:16:55.215 "optimal_io_boundary": 0, 00:16:55.215 "physical_block_size": 4096, 00:16:55.215 "uuid": 
"40670652-a128-4b6c-89cb-12e548e4a9e7" 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "bdev_wait_for_examine" 00:16:55.215 } 00:16:55.215 ] 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "subsystem": "nbd", 00:16:55.215 "config": [] 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "subsystem": "scheduler", 00:16:55.215 "config": [ 00:16:55.215 { 00:16:55.215 "method": "framework_set_scheduler", 00:16:55.215 "params": { 00:16:55.215 "name": "static" 00:16:55.215 } 00:16:55.215 } 00:16:55.215 ] 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "subsystem": "nvmf", 00:16:55.215 "config": [ 00:16:55.215 { 00:16:55.215 "method": "nvmf_set_config", 00:16:55.215 "params": { 00:16:55.215 "admin_cmd_passthru": { 00:16:55.215 "identify_ctrlr": false 00:16:55.215 }, 00:16:55.215 "discovery_filter": "match_any" 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "nvmf_set_max_subsystems", 00:16:55.215 "params": { 00:16:55.215 "max_subsystems": 1024 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "nvmf_set_crdt", 00:16:55.215 "params": { 00:16:55.215 "crdt1": 0, 00:16:55.215 "crdt2": 0, 00:16:55.215 "crdt3": 0 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "nvmf_create_transport", 00:16:55.215 "params": { 00:16:55.215 "abort_timeout_sec": 1, 00:16:55.215 "buf_cache_size": 4294967295, 00:16:55.215 "c2h_success": false, 00:16:55.215 "dif_insert_or_strip": false, 00:16:55.215 "in_capsule_data_size": 4096, 00:16:55.215 "io_unit_size": 131072, 00:16:55.215 "max_aq_depth": 128, 00:16:55.215 "max_io_qpairs_per_ctrlr": 127, 00:16:55.215 "max_io_size": 131072, 00:16:55.215 "max_queue_depth": 128, 00:16:55.215 "num_shared_buffers": 511, 00:16:55.215 "sock_priority": 0, 00:16:55.215 "trtype": "TCP", 00:16:55.215 "zcopy": false 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "nvmf_create_subsystem", 00:16:55.215 "params": { 00:16:55.215 "allow_any_host": false, 00:16:55.215 "ana_reporting": false, 00:16:55.215 "max_cntlid": 65519, 00:16:55.215 "max_namespaces": 10, 00:16:55.215 "min_cntlid": 1, 00:16:55.215 "model_number": "SPDK bdev Controller", 00:16:55.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.215 "serial_number": "SPDK00000000000001" 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "nvmf_subsystem_add_host", 00:16:55.215 "params": { 00:16:55.215 "host": "nqn.2016-06.io.spdk:host1", 00:16:55.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.215 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "nvmf_subsystem_add_ns", 00:16:55.215 "params": { 00:16:55.215 "namespace": { 00:16:55.215 "bdev_name": "malloc0", 00:16:55.215 "nguid": "40670652A1284B6C89CB12E548E4A9E7", 00:16:55.215 "nsid": 1, 00:16:55.215 "uuid": "40670652-a128-4b6c-89cb-12e548e4a9e7" 00:16:55.215 }, 00:16:55.215 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:55.215 } 00:16:55.215 }, 00:16:55.215 { 00:16:55.215 "method": "nvmf_subsystem_add_listener", 00:16:55.215 "params": { 00:16:55.215 "listen_address": { 00:16:55.215 "adrfam": "IPv4", 00:16:55.215 "traddr": "10.0.0.2", 00:16:55.215 "trsvcid": "4420", 00:16:55.215 "trtype": "TCP" 00:16:55.215 }, 00:16:55.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.215 "secure_channel": true 00:16:55.215 } 00:16:55.215 } 00:16:55.215 ] 00:16:55.215 } 00:16:55.215 ] 00:16:55.215 }' 00:16:55.215 18:11:53 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:16:55.473 18:11:53 -- target/tls.sh@206 -- # bdevperfconf='{ 00:16:55.473 "subsystems": [ 00:16:55.473 { 00:16:55.473 "subsystem": "iobuf", 00:16:55.473 "config": [ 00:16:55.473 { 00:16:55.473 "method": "iobuf_set_options", 00:16:55.473 "params": { 00:16:55.473 "large_bufsize": 135168, 00:16:55.473 "large_pool_count": 1024, 00:16:55.473 "small_bufsize": 8192, 00:16:55.473 "small_pool_count": 8192 00:16:55.473 } 00:16:55.473 } 00:16:55.473 ] 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "subsystem": "sock", 00:16:55.473 "config": [ 00:16:55.473 { 00:16:55.473 "method": "sock_impl_set_options", 00:16:55.473 "params": { 00:16:55.473 "enable_ktls": false, 00:16:55.473 "enable_placement_id": 0, 00:16:55.473 "enable_quickack": false, 00:16:55.473 "enable_recv_pipe": true, 00:16:55.473 "enable_zerocopy_send_client": false, 00:16:55.473 "enable_zerocopy_send_server": true, 00:16:55.473 "impl_name": "posix", 00:16:55.473 "recv_buf_size": 2097152, 00:16:55.473 "send_buf_size": 2097152, 00:16:55.473 "tls_version": 0, 00:16:55.473 "zerocopy_threshold": 0 00:16:55.473 } 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "method": "sock_impl_set_options", 00:16:55.473 "params": { 00:16:55.473 "enable_ktls": false, 00:16:55.473 "enable_placement_id": 0, 00:16:55.473 "enable_quickack": false, 00:16:55.473 "enable_recv_pipe": true, 00:16:55.473 "enable_zerocopy_send_client": false, 00:16:55.473 "enable_zerocopy_send_server": true, 00:16:55.473 "impl_name": "ssl", 00:16:55.473 "recv_buf_size": 4096, 00:16:55.473 "send_buf_size": 4096, 00:16:55.473 "tls_version": 0, 00:16:55.473 "zerocopy_threshold": 0 00:16:55.473 } 00:16:55.473 } 00:16:55.473 ] 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "subsystem": "vmd", 00:16:55.473 "config": [] 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "subsystem": "accel", 00:16:55.473 "config": [ 00:16:55.473 { 00:16:55.473 "method": "accel_set_options", 00:16:55.473 "params": { 00:16:55.473 "buf_count": 2048, 00:16:55.473 "large_cache_size": 16, 00:16:55.473 "sequence_count": 2048, 00:16:55.473 "small_cache_size": 128, 00:16:55.473 "task_count": 2048 00:16:55.473 } 00:16:55.473 } 00:16:55.473 ] 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "subsystem": "bdev", 00:16:55.473 "config": [ 00:16:55.473 { 00:16:55.473 "method": "bdev_set_options", 00:16:55.473 "params": { 00:16:55.473 "bdev_auto_examine": true, 00:16:55.473 "bdev_io_cache_size": 256, 00:16:55.473 "bdev_io_pool_size": 65535, 00:16:55.473 "iobuf_large_cache_size": 16, 00:16:55.473 "iobuf_small_cache_size": 128 00:16:55.473 } 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "method": "bdev_raid_set_options", 00:16:55.473 "params": { 00:16:55.473 "process_window_size_kb": 1024 00:16:55.473 } 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "method": "bdev_iscsi_set_options", 00:16:55.473 "params": { 00:16:55.473 "timeout_sec": 30 00:16:55.473 } 00:16:55.473 }, 00:16:55.473 { 00:16:55.473 "method": "bdev_nvme_set_options", 00:16:55.473 "params": { 00:16:55.473 "action_on_timeout": "none", 00:16:55.473 "allow_accel_sequence": false, 00:16:55.474 "arbitration_burst": 0, 00:16:55.474 "bdev_retry_count": 3, 00:16:55.474 "ctrlr_loss_timeout_sec": 0, 00:16:55.474 "delay_cmd_submit": true, 00:16:55.474 "fast_io_fail_timeout_sec": 0, 00:16:55.474 "generate_uuids": false, 00:16:55.474 "high_priority_weight": 0, 00:16:55.474 "io_path_stat": false, 00:16:55.474 "io_queue_requests": 512, 00:16:55.474 "keep_alive_timeout_ms": 10000, 00:16:55.474 "low_priority_weight": 0, 00:16:55.474 "medium_priority_weight": 0, 00:16:55.474 "nvme_adminq_poll_period_us": 
10000, 00:16:55.474 "nvme_ioq_poll_period_us": 0, 00:16:55.474 "reconnect_delay_sec": 0, 00:16:55.474 "timeout_admin_us": 0, 00:16:55.474 "timeout_us": 0, 00:16:55.474 "transport_ack_timeout": 0, 00:16:55.474 "transport_retry_count": 4, 00:16:55.474 "transport_tos": 0 00:16:55.474 } 00:16:55.474 }, 00:16:55.474 { 00:16:55.474 "method": "bdev_nvme_attach_controller", 00:16:55.474 "params": { 00:16:55.474 "adrfam": "IPv4", 00:16:55.474 "ctrlr_loss_timeout_sec": 0, 00:16:55.474 "ddgst": false, 00:16:55.474 "fast_io_fail_timeout_sec": 0, 00:16:55.474 "hdgst": false, 00:16:55.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.474 "name": "TLSTEST", 00:16:55.474 "prchk_guard": false, 00:16:55.474 "prchk_reftag": false, 00:16:55.474 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:16:55.474 "reconnect_delay_sec": 0, 00:16:55.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.474 "traddr": "10.0.0.2", 00:16:55.474 "trsvcid": "4420", 00:16:55.474 "trtype": "TCP" 00:16:55.474 } 00:16:55.474 }, 00:16:55.474 { 00:16:55.474 "method": "bdev_nvme_set_hotplug", 00:16:55.474 "params": { 00:16:55.474 "enable": false, 00:16:55.474 "period_us": 100000 00:16:55.474 } 00:16:55.474 }, 00:16:55.474 { 00:16:55.474 "method": "bdev_wait_for_examine" 00:16:55.474 } 00:16:55.474 ] 00:16:55.474 }, 00:16:55.474 { 00:16:55.474 "subsystem": "nbd", 00:16:55.474 "config": [] 00:16:55.474 } 00:16:55.474 ] 00:16:55.474 }' 00:16:55.474 18:11:53 -- target/tls.sh@208 -- # killprocess 77181 00:16:55.474 18:11:53 -- common/autotest_common.sh@926 -- # '[' -z 77181 ']' 00:16:55.474 18:11:53 -- common/autotest_common.sh@930 -- # kill -0 77181 00:16:55.474 18:11:53 -- common/autotest_common.sh@931 -- # uname 00:16:55.474 18:11:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:55.474 18:11:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77181 00:16:55.474 18:11:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:55.474 18:11:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:55.474 killing process with pid 77181 00:16:55.474 18:11:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77181' 00:16:55.474 18:11:53 -- common/autotest_common.sh@945 -- # kill 77181 00:16:55.474 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.474 00:16:55.474 Latency(us) 00:16:55.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.474 =================================================================================================================== 00:16:55.474 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:55.474 18:11:53 -- common/autotest_common.sh@950 -- # wait 77181 00:16:55.734 18:11:53 -- target/tls.sh@209 -- # killprocess 77080 00:16:55.734 18:11:53 -- common/autotest_common.sh@926 -- # '[' -z 77080 ']' 00:16:55.734 18:11:53 -- common/autotest_common.sh@930 -- # kill -0 77080 00:16:55.734 18:11:53 -- common/autotest_common.sh@931 -- # uname 00:16:55.734 18:11:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:55.734 18:11:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77080 00:16:55.734 18:11:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:55.734 18:11:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:55.734 killing process with pid 77080 00:16:55.734 18:11:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77080' 00:16:55.734 18:11:53 -- 
common/autotest_common.sh@945 -- # kill 77080 00:16:55.734 18:11:53 -- common/autotest_common.sh@950 -- # wait 77080 00:16:56.068 18:11:53 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:56.068 18:11:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.068 18:11:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:56.068 18:11:53 -- target/tls.sh@212 -- # echo '{ 00:16:56.068 "subsystems": [ 00:16:56.068 { 00:16:56.068 "subsystem": "iobuf", 00:16:56.068 "config": [ 00:16:56.068 { 00:16:56.068 "method": "iobuf_set_options", 00:16:56.068 "params": { 00:16:56.068 "large_bufsize": 135168, 00:16:56.068 "large_pool_count": 1024, 00:16:56.068 "small_bufsize": 8192, 00:16:56.068 "small_pool_count": 8192 00:16:56.068 } 00:16:56.068 } 00:16:56.068 ] 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "subsystem": "sock", 00:16:56.068 "config": [ 00:16:56.068 { 00:16:56.068 "method": "sock_impl_set_options", 00:16:56.068 "params": { 00:16:56.068 "enable_ktls": false, 00:16:56.068 "enable_placement_id": 0, 00:16:56.068 "enable_quickack": false, 00:16:56.068 "enable_recv_pipe": true, 00:16:56.068 "enable_zerocopy_send_client": false, 00:16:56.068 "enable_zerocopy_send_server": true, 00:16:56.068 "impl_name": "posix", 00:16:56.068 "recv_buf_size": 2097152, 00:16:56.068 "send_buf_size": 2097152, 00:16:56.068 "tls_version": 0, 00:16:56.068 "zerocopy_threshold": 0 00:16:56.068 } 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "method": "sock_impl_set_options", 00:16:56.068 "params": { 00:16:56.068 "enable_ktls": false, 00:16:56.068 "enable_placement_id": 0, 00:16:56.068 "enable_quickack": false, 00:16:56.068 "enable_recv_pipe": true, 00:16:56.068 "enable_zerocopy_send_client": false, 00:16:56.068 "enable_zerocopy_send_server": true, 00:16:56.068 "impl_name": "ssl", 00:16:56.068 "recv_buf_size": 4096, 00:16:56.068 "send_buf_size": 4096, 00:16:56.068 "tls_version": 0, 00:16:56.068 "zerocopy_threshold": 0 00:16:56.068 } 00:16:56.068 } 00:16:56.068 ] 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "subsystem": "vmd", 00:16:56.068 "config": [] 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "subsystem": "accel", 00:16:56.068 "config": [ 00:16:56.068 { 00:16:56.068 "method": "accel_set_options", 00:16:56.068 "params": { 00:16:56.068 "buf_count": 2048, 00:16:56.068 "large_cache_size": 16, 00:16:56.068 "sequence_count": 2048, 00:16:56.068 "small_cache_size": 128, 00:16:56.068 "task_count": 2048 00:16:56.068 } 00:16:56.068 } 00:16:56.068 ] 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "subsystem": "bdev", 00:16:56.068 "config": [ 00:16:56.068 { 00:16:56.068 "method": "bdev_set_options", 00:16:56.068 "params": { 00:16:56.068 "bdev_auto_examine": true, 00:16:56.068 "bdev_io_cache_size": 256, 00:16:56.068 "bdev_io_pool_size": 65535, 00:16:56.068 "iobuf_large_cache_size": 16, 00:16:56.068 "iobuf_small_cache_size": 128 00:16:56.068 } 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "method": "bdev_raid_set_options", 00:16:56.068 "params": { 00:16:56.068 "process_window_size_kb": 1024 00:16:56.068 } 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "method": "bdev_iscsi_set_options", 00:16:56.068 "params": { 00:16:56.068 "timeout_sec": 30 00:16:56.068 } 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "method": "bdev_nvme_set_options", 00:16:56.068 "params": { 00:16:56.068 "action_on_timeout": "none", 00:16:56.068 "allow_accel_sequence": false, 00:16:56.068 "arbitration_burst": 0, 00:16:56.068 "bdev_retry_count": 3, 00:16:56.068 "ctrlr_loss_timeout_sec": 0, 00:16:56.068 "delay_cmd_submit": true, 00:16:56.068 
"fast_io_fail_timeout_sec": 0, 00:16:56.068 "generate_uuids": false, 00:16:56.068 "high_priority_weight": 0, 00:16:56.068 "io_path_stat": false, 00:16:56.068 "io_queue_requests": 0, 00:16:56.068 "keep_alive_timeout_ms": 10000, 00:16:56.068 "low_priority_weight": 0, 00:16:56.068 "medium_priority_weight": 0, 00:16:56.068 "nvme_adminq_poll_period_us": 10000, 00:16:56.068 "nvme_ioq_poll_period_us": 0, 00:16:56.068 "reconnect_delay_sec": 0, 00:16:56.068 "timeout_admin_us": 0, 00:16:56.068 "timeout_us": 0, 00:16:56.068 "transport_ack_timeout": 0, 00:16:56.068 "transport_retry_count": 4, 00:16:56.068 "transport_tos": 0 00:16:56.068 } 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "method": "bdev_nvme_set_hotplug", 00:16:56.068 "params": { 00:16:56.068 "enable": false, 00:16:56.068 "period_us": 100000 00:16:56.068 } 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "method": "bdev_malloc_create", 00:16:56.068 "params": { 00:16:56.068 "block_size": 4096, 00:16:56.068 "name": "malloc0", 00:16:56.068 "num_blocks": 8192, 00:16:56.068 "optimal_io_boundary": 0, 00:16:56.068 "physical_block_size": 4096, 00:16:56.068 "uuid": "40670652-a128-4b6c-89cb-12e548e4a9e7" 00:16:56.068 } 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "method": "bdev_wait_for_examine" 00:16:56.068 } 00:16:56.068 ] 00:16:56.068 }, 00:16:56.068 { 00:16:56.068 "subsystem": "nbd", 00:16:56.068 "config": [] 00:16:56.068 }, 00:16:56.069 { 00:16:56.069 "subsystem": "scheduler", 00:16:56.069 "config": [ 00:16:56.069 { 00:16:56.069 "method": "framework_set_scheduler", 00:16:56.069 "params": { 00:16:56.069 "name": "static" 00:16:56.069 } 00:16:56.069 } 00:16:56.069 ] 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "subsystem": "nvmf", 00:16:56.069 "config": [ 00:16:56.069 { 00:16:56.069 "method": "nvmf_set_config", 00:16:56.069 "params": { 00:16:56.069 "admin_cmd_passthru": { 00:16:56.069 "identify_ctrlr": false 00:16:56.069 }, 00:16:56.069 "discovery_filter": "match_any" 00:16:56.069 } 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "method": "nvmf_set_max_subsystems", 00:16:56.069 "params": { 00:16:56.069 "max_subsystems": 1024 00:16:56.069 } 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "method": "nvmf_set_crdt", 00:16:56.069 "params": { 00:16:56.069 "crdt1": 0, 00:16:56.069 "crdt2": 0, 00:16:56.069 "crdt3": 0 00:16:56.069 } 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "method": "nvmf_create_transport", 00:16:56.069 "params": { 00:16:56.069 "abort_timeout_sec": 1, 00:16:56.069 "buf_cache_size": 4294967295, 00:16:56.069 "c2h_success": false, 00:16:56.069 "dif_insert_or_strip": false, 00:16:56.069 "in_capsule_data_size": 4096, 00:16:56.069 "io_unit_size": 131072, 00:16:56.069 "max_aq_depth": 128, 00:16:56.069 "max_io_qpairs_per_ctrlr": 127, 00:16:56.069 "max_io_size": 131072, 00:16:56.069 "max_queue_depth": 128, 00:16:56.069 "num_shared_buffers": 511, 00:16:56.069 "sock_priority": 0, 00:16:56.069 "trtype": "TCP", 00:16:56.069 "zcopy": false 00:16:56.069 } 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "method": "nvmf_create_subsystem", 00:16:56.069 "params": { 00:16:56.069 "allow_any_host": false, 00:16:56.069 "ana_reporting": false, 00:16:56.069 "max_cntlid": 65519, 00:16:56.069 "max_namespaces": 10, 00:16:56.069 "min_cntlid": 1, 00:16:56.069 "model_number": "SPDK bdev Controller", 00:16:56.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.069 "serial_number": "SPDK00000000000001" 00:16:56.069 } 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "method": "nvmf_subsystem_add_host", 00:16:56.069 "params": { 00:16:56.069 "host": "nqn.2016-06.io.spdk:host1", 00:16:56.069 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.069 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:16:56.069 } 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "method": "nvmf_subsystem_add_ns", 00:16:56.069 "params": { 00:16:56.069 "namespace": { 00:16:56.069 "bdev_name": "malloc0", 00:16:56.069 "nguid": "40670652A1284B6C89CB12E548E4A9E7", 00:16:56.069 "nsid": 1, 00:16:56.069 "uuid": "40670652-a128-4b6c-89cb-12e548e4a9e7" 00:16:56.069 }, 00:16:56.069 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:56.069 } 00:16:56.069 }, 00:16:56.069 { 00:16:56.069 "method": "nvmf_subsystem_add_listener", 00:16:56.069 "params": { 00:16:56.069 "listen_address": { 00:16:56.069 "adrfam": "IPv4", 00:16:56.069 "traddr": "10.0.0.2", 00:16:56.069 "trsvcid": "4420", 00:16:56.069 "trtype": "TCP" 00:16:56.069 }, 00:16:56.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.069 "secure_channel": true 00:16:56.069 } 00:16:56.069 } 00:16:56.069 ] 00:16:56.069 } 00:16:56.069 ] 00:16:56.069 }' 00:16:56.069 18:11:53 -- common/autotest_common.sh@10 -- # set +x 00:16:56.069 18:11:53 -- nvmf/common.sh@469 -- # nvmfpid=77250 00:16:56.069 18:11:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:56.069 18:11:53 -- nvmf/common.sh@470 -- # waitforlisten 77250 00:16:56.069 18:11:53 -- common/autotest_common.sh@819 -- # '[' -z 77250 ']' 00:16:56.069 18:11:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.069 18:11:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.069 18:11:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.069 18:11:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.069 18:11:53 -- common/autotest_common.sh@10 -- # set +x 00:16:56.069 [2024-04-25 18:11:53.925068] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:56.069 [2024-04-25 18:11:53.925187] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.327 [2024-04-25 18:11:54.064709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.327 [2024-04-25 18:11:54.147902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:56.327 [2024-04-25 18:11:54.148034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.327 [2024-04-25 18:11:54.148044] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.327 [2024-04-25 18:11:54.148052] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.327 [2024-04-25 18:11:54.148079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.585 [2024-04-25 18:11:54.359426] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.585 [2024-04-25 18:11:54.391390] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:56.585 [2024-04-25 18:11:54.391594] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.151 18:11:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.151 18:11:54 -- common/autotest_common.sh@852 -- # return 0 00:16:57.151 18:11:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:57.151 18:11:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:57.151 18:11:54 -- common/autotest_common.sh@10 -- # set +x 00:16:57.151 18:11:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.151 18:11:54 -- target/tls.sh@216 -- # bdevperf_pid=77294 00:16:57.151 18:11:54 -- target/tls.sh@217 -- # waitforlisten 77294 /var/tmp/bdevperf.sock 00:16:57.151 18:11:54 -- common/autotest_common.sh@819 -- # '[' -z 77294 ']' 00:16:57.151 18:11:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.151 18:11:54 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:57.151 18:11:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:57.151 18:11:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:57.151 18:11:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:57.151 18:11:54 -- target/tls.sh@213 -- # echo '{ 00:16:57.151 "subsystems": [ 00:16:57.151 { 00:16:57.151 "subsystem": "iobuf", 00:16:57.151 "config": [ 00:16:57.151 { 00:16:57.151 "method": "iobuf_set_options", 00:16:57.151 "params": { 00:16:57.151 "large_bufsize": 135168, 00:16:57.151 "large_pool_count": 1024, 00:16:57.151 "small_bufsize": 8192, 00:16:57.151 "small_pool_count": 8192 00:16:57.151 } 00:16:57.151 } 00:16:57.151 ] 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "subsystem": "sock", 00:16:57.151 "config": [ 00:16:57.151 { 00:16:57.151 "method": "sock_impl_set_options", 00:16:57.151 "params": { 00:16:57.151 "enable_ktls": false, 00:16:57.151 "enable_placement_id": 0, 00:16:57.151 "enable_quickack": false, 00:16:57.151 "enable_recv_pipe": true, 00:16:57.151 "enable_zerocopy_send_client": false, 00:16:57.151 "enable_zerocopy_send_server": true, 00:16:57.151 "impl_name": "posix", 00:16:57.151 "recv_buf_size": 2097152, 00:16:57.151 "send_buf_size": 2097152, 00:16:57.151 "tls_version": 0, 00:16:57.151 "zerocopy_threshold": 0 00:16:57.151 } 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "method": "sock_impl_set_options", 00:16:57.151 "params": { 00:16:57.151 "enable_ktls": false, 00:16:57.151 "enable_placement_id": 0, 00:16:57.151 "enable_quickack": false, 00:16:57.151 "enable_recv_pipe": true, 00:16:57.151 "enable_zerocopy_send_client": false, 00:16:57.151 "enable_zerocopy_send_server": true, 00:16:57.151 "impl_name": "ssl", 00:16:57.151 "recv_buf_size": 4096, 00:16:57.151 "send_buf_size": 4096, 00:16:57.151 "tls_version": 0, 00:16:57.151 "zerocopy_threshold": 0 00:16:57.151 } 00:16:57.151 } 00:16:57.151 ] 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "subsystem": "vmd", 00:16:57.151 "config": [] 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "subsystem": "accel", 00:16:57.151 "config": [ 00:16:57.151 { 00:16:57.151 "method": "accel_set_options", 00:16:57.151 "params": { 00:16:57.151 "buf_count": 2048, 00:16:57.151 "large_cache_size": 16, 00:16:57.151 "sequence_count": 2048, 00:16:57.151 "small_cache_size": 128, 00:16:57.151 "task_count": 2048 00:16:57.151 } 00:16:57.151 } 00:16:57.151 ] 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "subsystem": "bdev", 00:16:57.151 "config": [ 00:16:57.151 { 00:16:57.151 "method": "bdev_set_options", 00:16:57.151 "params": { 00:16:57.151 "bdev_auto_examine": true, 00:16:57.151 "bdev_io_cache_size": 256, 00:16:57.151 "bdev_io_pool_size": 65535, 00:16:57.151 "iobuf_large_cache_size": 16, 00:16:57.151 "iobuf_small_cache_size": 128 00:16:57.151 } 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "method": "bdev_raid_set_options", 00:16:57.151 "params": { 00:16:57.151 "process_window_size_kb": 1024 00:16:57.151 } 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "method": "bdev_iscsi_set_options", 00:16:57.151 "params": { 00:16:57.151 "timeout_sec": 30 00:16:57.151 } 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "method": "bdev_nvme_set_options", 00:16:57.151 "params": { 00:16:57.151 "action_on_timeout": "none", 00:16:57.151 "allow_accel_sequence": false, 00:16:57.151 "arbitration_burst": 0, 00:16:57.151 "bdev_retry_count": 3, 00:16:57.151 "ctrlr_loss_timeout_sec": 0, 00:16:57.151 "delay_cmd_submit": true, 00:16:57.151 "fast_io_fail_timeout_sec": 0, 00:16:57.151 "generate_uuids": false, 00:16:57.151 "high_priority_weight": 0, 00:16:57.151 "io_path_stat": false, 00:16:57.151 "io_queue_requests": 512, 00:16:57.151 "keep_alive_timeout_ms": 10000, 00:16:57.151 "low_priority_weight": 0, 00:16:57.151 
"medium_priority_weight": 0, 00:16:57.151 "nvme_adminq_poll_period_us": 10000, 00:16:57.151 "nvme_ioq_poll_period_us": 0, 00:16:57.151 "reconnect_delay_sec": 0, 00:16:57.151 "timeout_admin_us": 0, 00:16:57.151 "timeout_us": 0, 00:16:57.151 "transport_ack_timeout": 0, 00:16:57.151 "transport_retry_count": 4, 00:16:57.151 "transport_tos": 0 00:16:57.152 } 00:16:57.152 }, 00:16:57.152 { 00:16:57.152 "method": "bdev_nvme_attach_controller", 00:16:57.152 "params": { 00:16:57.152 "adrfam": "IPv4", 00:16:57.152 "ctrlr_loss_timeout_sec": 0, 00:16:57.152 "ddgst": false, 00:16:57.152 "fast_io_fail_timeout_sec": 0, 00:16:57.152 "hdgst": false, 00:16:57.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.152 "name": "TLSTEST", 00:16:57.152 "prchk_guard": false, 00:16:57.152 "prchk_reftag": false, 00:16:57.152 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:16:57.152 "reconnect_delay_sec": 0, 00:16:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.152 "traddr": "10.0.0.2", 00:16:57.152 "trsvcid": "4420", 00:16:57.152 "trtype": "TCP" 00:16:57.152 } 00:16:57.152 }, 00:16:57.152 { 00:16:57.152 "method": "bdev_nvme_set_hotplug", 00:16:57.152 "params": { 00:16:57.152 "enable": false, 00:16:57.152 "period_us": 100000 00:16:57.152 } 00:16:57.152 }, 00:16:57.152 { 00:16:57.152 "method": "bdev_wait_for_examine" 00:16:57.152 } 00:16:57.152 ] 00:16:57.152 }, 00:16:57.152 { 00:16:57.152 "subsystem": "nbd", 00:16:57.152 "config": [] 00:16:57.152 } 00:16:57.152 ] 00:16:57.152 }' 00:16:57.152 18:11:54 -- common/autotest_common.sh@10 -- # set +x 00:16:57.152 [2024-04-25 18:11:54.933806] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:57.152 [2024-04-25 18:11:54.933896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77294 ] 00:16:57.152 [2024-04-25 18:11:55.072998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.410 [2024-04-25 18:11:55.188448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.410 [2024-04-25 18:11:55.340477] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.976 18:11:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.976 18:11:55 -- common/autotest_common.sh@852 -- # return 0 00:16:57.976 18:11:55 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:58.235 Running I/O for 10 seconds... 
00:17:08.213 00:17:08.213 Latency(us) 00:17:08.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.213 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:08.213 Verification LBA range: start 0x0 length 0x2000 00:17:08.213 TLSTESTn1 : 10.01 6467.38 25.26 0.00 0.00 19759.40 4498.15 19660.80 00:17:08.213 =================================================================================================================== 00:17:08.213 Total : 6467.38 25.26 0.00 0.00 19759.40 4498.15 19660.80 00:17:08.213 0 00:17:08.213 18:12:05 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.213 18:12:05 -- target/tls.sh@223 -- # killprocess 77294 00:17:08.213 18:12:05 -- common/autotest_common.sh@926 -- # '[' -z 77294 ']' 00:17:08.213 18:12:05 -- common/autotest_common.sh@930 -- # kill -0 77294 00:17:08.213 18:12:05 -- common/autotest_common.sh@931 -- # uname 00:17:08.213 18:12:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:08.213 18:12:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77294 00:17:08.213 18:12:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:08.213 18:12:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:08.213 killing process with pid 77294 00:17:08.213 18:12:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77294' 00:17:08.213 18:12:05 -- common/autotest_common.sh@945 -- # kill 77294 00:17:08.213 Received shutdown signal, test time was about 10.000000 seconds 00:17:08.213 00:17:08.213 Latency(us) 00:17:08.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.213 =================================================================================================================== 00:17:08.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:08.213 18:12:05 -- common/autotest_common.sh@950 -- # wait 77294 00:17:08.472 18:12:06 -- target/tls.sh@224 -- # killprocess 77250 00:17:08.472 18:12:06 -- common/autotest_common.sh@926 -- # '[' -z 77250 ']' 00:17:08.472 18:12:06 -- common/autotest_common.sh@930 -- # kill -0 77250 00:17:08.472 18:12:06 -- common/autotest_common.sh@931 -- # uname 00:17:08.472 18:12:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:08.472 18:12:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77250 00:17:08.472 killing process with pid 77250 00:17:08.472 18:12:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:08.472 18:12:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:08.472 18:12:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77250' 00:17:08.472 18:12:06 -- common/autotest_common.sh@945 -- # kill 77250 00:17:08.472 18:12:06 -- common/autotest_common.sh@950 -- # wait 77250 00:17:08.730 18:12:06 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:08.730 18:12:06 -- target/tls.sh@227 -- # cleanup 00:17:08.730 18:12:06 -- target/tls.sh@15 -- # process_shm --id 0 00:17:08.730 18:12:06 -- common/autotest_common.sh@796 -- # type=--id 00:17:08.730 18:12:06 -- common/autotest_common.sh@797 -- # id=0 00:17:08.730 18:12:06 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:08.730 18:12:06 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:08.730 18:12:06 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:08.730 18:12:06 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 
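After both TLS phases pass, cleanup takes over: the bdevperf and target pids were already reaped at the end of the test (hence the "No such process" notes below when killprocess is called again), and process_shm archives the tracepoint buffer the target left in shared memory. The tar invocation that follows in the log packs it into the job's output directory; in essence:

# Locate the trace shm file left by 'nvmf_tgt -i 0' and archive it for offline analysis.
find /dev/shm -name '*.0' -printf '%f\n'        # -> nvmf_trace.0
tar -C /dev/shm/ -cvzf \
    /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0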
00:17:08.730 18:12:06 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:08.730 18:12:06 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:08.730 nvmf_trace.0 00:17:08.730 18:12:06 -- common/autotest_common.sh@811 -- # return 0 00:17:08.730 18:12:06 -- target/tls.sh@16 -- # killprocess 77294 00:17:08.730 18:12:06 -- common/autotest_common.sh@926 -- # '[' -z 77294 ']' 00:17:08.730 Process with pid 77294 is not found 00:17:08.730 18:12:06 -- common/autotest_common.sh@930 -- # kill -0 77294 00:17:08.731 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77294) - No such process 00:17:08.731 18:12:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77294 is not found' 00:17:08.731 18:12:06 -- target/tls.sh@17 -- # nvmftestfini 00:17:08.731 18:12:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:08.731 18:12:06 -- nvmf/common.sh@116 -- # sync 00:17:08.731 18:12:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:08.731 18:12:06 -- nvmf/common.sh@119 -- # set +e 00:17:08.731 18:12:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:08.731 18:12:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:08.731 rmmod nvme_tcp 00:17:08.731 rmmod nvme_fabrics 00:17:08.731 rmmod nvme_keyring 00:17:08.731 18:12:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:08.731 18:12:06 -- nvmf/common.sh@123 -- # set -e 00:17:08.731 18:12:06 -- nvmf/common.sh@124 -- # return 0 00:17:08.731 18:12:06 -- nvmf/common.sh@477 -- # '[' -n 77250 ']' 00:17:08.731 18:12:06 -- nvmf/common.sh@478 -- # killprocess 77250 00:17:08.731 18:12:06 -- common/autotest_common.sh@926 -- # '[' -z 77250 ']' 00:17:08.731 Process with pid 77250 is not found 00:17:08.731 18:12:06 -- common/autotest_common.sh@930 -- # kill -0 77250 00:17:08.731 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77250) - No such process 00:17:08.731 18:12:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77250 is not found' 00:17:08.731 18:12:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:08.731 18:12:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:08.731 18:12:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:08.731 18:12:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.731 18:12:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:08.731 18:12:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.731 18:12:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.731 18:12:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.731 18:12:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:08.731 18:12:06 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.731 ************************************ 00:17:08.731 END TEST nvmf_tls 00:17:08.731 ************************************ 00:17:08.731 00:17:08.731 real 1m9.652s 00:17:08.731 user 1m45.359s 00:17:08.731 sys 0m25.010s 00:17:08.731 18:12:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.731 18:12:06 -- common/autotest_common.sh@10 -- # set +x 00:17:08.990 18:12:06 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:08.990 18:12:06 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:08.990 18:12:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:08.990 18:12:06 -- common/autotest_common.sh@10 -- # set +x 00:17:08.990 ************************************ 00:17:08.990 START TEST nvmf_fips 00:17:08.990 ************************************ 00:17:08.990 18:12:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:08.990 * Looking for test storage... 00:17:08.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:08.990 18:12:06 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.990 18:12:06 -- nvmf/common.sh@7 -- # uname -s 00:17:08.990 18:12:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.990 18:12:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.990 18:12:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.990 18:12:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.990 18:12:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.990 18:12:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.990 18:12:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.990 18:12:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.990 18:12:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.990 18:12:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.990 18:12:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:17:08.990 18:12:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:17:08.990 18:12:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.990 18:12:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.990 18:12:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.990 18:12:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.990 18:12:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.990 18:12:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.990 18:12:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.990 18:12:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.990 18:12:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.990 18:12:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.990 18:12:06 -- paths/export.sh@5 -- # export PATH 00:17:08.990 18:12:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.990 18:12:06 -- nvmf/common.sh@46 -- # : 0 00:17:08.990 18:12:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:08.990 18:12:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:08.990 18:12:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:08.990 18:12:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.990 18:12:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.990 18:12:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:08.990 18:12:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:08.990 18:12:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:08.990 18:12:06 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.990 18:12:06 -- fips/fips.sh@89 -- # check_openssl_version 00:17:08.990 18:12:06 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:08.990 18:12:06 -- fips/fips.sh@85 -- # openssl version 00:17:08.990 18:12:06 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:08.990 18:12:06 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:08.990 18:12:06 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:08.990 18:12:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:08.990 18:12:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:08.990 18:12:06 -- scripts/common.sh@335 -- # IFS=.-: 00:17:08.990 18:12:06 -- scripts/common.sh@335 -- # read -ra ver1 00:17:08.990 18:12:06 -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.990 18:12:06 -- scripts/common.sh@336 -- # read -ra ver2 00:17:08.990 18:12:06 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:08.990 18:12:06 -- scripts/common.sh@339 -- # ver1_l=3 00:17:08.990 18:12:06 -- scripts/common.sh@340 -- # ver2_l=3 00:17:08.990 18:12:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:08.990 18:12:06 -- scripts/common.sh@343 -- # case "$op" in 00:17:08.990 18:12:06 -- scripts/common.sh@347 -- # : 1 00:17:08.990 18:12:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:08.990 18:12:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.990 18:12:06 -- scripts/common.sh@364 -- # decimal 3 00:17:08.990 18:12:06 -- scripts/common.sh@352 -- # local d=3 00:17:08.990 18:12:06 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:08.990 18:12:06 -- scripts/common.sh@354 -- # echo 3 00:17:08.990 18:12:06 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:08.990 18:12:06 -- scripts/common.sh@365 -- # decimal 3 00:17:08.990 18:12:06 -- scripts/common.sh@352 -- # local d=3 00:17:08.990 18:12:06 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:08.990 18:12:06 -- scripts/common.sh@354 -- # echo 3 00:17:08.990 18:12:06 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:08.990 18:12:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:08.990 18:12:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:08.990 18:12:06 -- scripts/common.sh@363 -- # (( v++ )) 00:17:08.990 18:12:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.990 18:12:06 -- scripts/common.sh@364 -- # decimal 0 00:17:08.990 18:12:06 -- scripts/common.sh@352 -- # local d=0 00:17:08.990 18:12:06 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:08.990 18:12:06 -- scripts/common.sh@354 -- # echo 0 00:17:08.990 18:12:06 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:08.990 18:12:06 -- scripts/common.sh@365 -- # decimal 0 00:17:08.990 18:12:06 -- scripts/common.sh@352 -- # local d=0 00:17:08.990 18:12:06 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:08.990 18:12:06 -- scripts/common.sh@354 -- # echo 0 00:17:08.990 18:12:06 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:08.990 18:12:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:08.990 18:12:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:08.991 18:12:06 -- scripts/common.sh@363 -- # (( v++ )) 00:17:08.991 18:12:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.991 18:12:06 -- scripts/common.sh@364 -- # decimal 9 00:17:08.991 18:12:06 -- scripts/common.sh@352 -- # local d=9 00:17:08.991 18:12:06 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:08.991 18:12:06 -- scripts/common.sh@354 -- # echo 9 00:17:08.991 18:12:06 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:08.991 18:12:06 -- scripts/common.sh@365 -- # decimal 0 00:17:08.991 18:12:06 -- scripts/common.sh@352 -- # local d=0 00:17:08.991 18:12:06 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:08.991 18:12:06 -- scripts/common.sh@354 -- # echo 0 00:17:08.991 18:12:06 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:08.991 18:12:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:08.991 18:12:06 -- scripts/common.sh@366 -- # return 0 00:17:08.991 18:12:06 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:08.991 18:12:06 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:08.991 18:12:06 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:08.991 18:12:06 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:08.991 18:12:06 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:08.991 18:12:06 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:08.991 18:12:06 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:08.991 18:12:06 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:08.991 18:12:06 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:08.991 18:12:06 -- fips/fips.sh@114 -- # build_openssl_config 00:17:08.991 18:12:06 -- fips/fips.sh@37 -- # cat 00:17:08.991 18:12:06 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:08.991 18:12:06 -- fips/fips.sh@58 -- # cat - 00:17:08.991 18:12:06 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:08.991 18:12:06 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:08.991 18:12:06 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:08.991 18:12:06 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:08.991 18:12:06 -- fips/fips.sh@117 -- # grep name 00:17:08.991 18:12:06 -- fips/fips.sh@117 -- # openssl list -providers 00:17:08.991 18:12:06 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:08.991 18:12:06 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:08.991 18:12:06 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:08.991 18:12:06 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:08.991 18:12:06 -- fips/fips.sh@128 -- # : 00:17:08.991 18:12:06 -- common/autotest_common.sh@640 -- # local es=0 00:17:08.991 18:12:06 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:08.991 18:12:06 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:08.991 18:12:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.991 18:12:06 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:08.991 18:12:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.991 18:12:06 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:08.991 18:12:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.991 18:12:06 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:08.991 18:12:06 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:08.991 18:12:06 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:09.250 Error setting digest 00:17:09.250 004226300E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:09.250 004226300E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:09.250 18:12:06 -- common/autotest_common.sh@643 -- # es=1 00:17:09.250 18:12:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:09.250 18:12:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:09.250 18:12:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
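The digest failure above is the point of the check: with OPENSSL_CONF pointing at the generated spdk_fips.conf, only the base and fips providers are active and MD5 is refused, so fips.sh knows it really is running against a FIPS-enforcing OpenSSL. A stand-alone sketch of the same sanity check (spdk_fips.conf here stands for whatever build_openssl_config produced; an illustration, not the script itself):

  export OPENSSL_CONF=spdk_fips.conf
  openssl list -providers | grep name          # expect one base and one fips provider
  if echo -n test | openssl md5; then
      echo 'MD5 accepted - FIPS mode is not enforced' >&2
      exit 1
  fi
  echo 'MD5 rejected as expected under FIPS'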
00:17:09.250 18:12:06 -- fips/fips.sh@131 -- # nvmftestinit 00:17:09.250 18:12:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:09.250 18:12:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.250 18:12:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:09.250 18:12:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:09.250 18:12:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:09.250 18:12:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.250 18:12:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.250 18:12:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.250 18:12:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:09.250 18:12:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:09.250 18:12:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:09.250 18:12:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:09.250 18:12:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:09.250 18:12:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:09.250 18:12:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.250 18:12:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.250 18:12:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:09.250 18:12:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:09.250 18:12:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:09.250 18:12:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:09.250 18:12:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:09.250 18:12:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.250 18:12:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:09.250 18:12:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:09.250 18:12:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:09.250 18:12:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:09.250 18:12:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:09.250 18:12:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:09.250 Cannot find device "nvmf_tgt_br" 00:17:09.250 18:12:06 -- nvmf/common.sh@154 -- # true 00:17:09.250 18:12:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:09.250 Cannot find device "nvmf_tgt_br2" 00:17:09.250 18:12:07 -- nvmf/common.sh@155 -- # true 00:17:09.250 18:12:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:09.250 18:12:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:09.250 Cannot find device "nvmf_tgt_br" 00:17:09.250 18:12:07 -- nvmf/common.sh@157 -- # true 00:17:09.250 18:12:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:09.250 Cannot find device "nvmf_tgt_br2" 00:17:09.250 18:12:07 -- nvmf/common.sh@158 -- # true 00:17:09.250 18:12:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:09.250 18:12:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:09.250 18:12:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.250 18:12:07 -- nvmf/common.sh@161 -- # true 00:17:09.250 18:12:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:09.250 18:12:07 -- nvmf/common.sh@162 -- # true 00:17:09.250 18:12:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.250 18:12:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.250 18:12:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.250 18:12:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.250 18:12:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.250 18:12:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.250 18:12:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.250 18:12:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:09.250 18:12:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:09.250 18:12:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:09.250 18:12:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:09.250 18:12:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:09.509 18:12:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:09.509 18:12:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.509 18:12:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.509 18:12:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:09.510 18:12:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:09.510 18:12:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:09.510 18:12:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:09.510 18:12:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:09.510 18:12:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:09.510 18:12:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.510 18:12:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.510 18:12:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:09.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:09.510 00:17:09.510 --- 10.0.0.2 ping statistics --- 00:17:09.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.510 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:09.510 18:12:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:09.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:09.510 00:17:09.510 --- 10.0.0.3 ping statistics --- 00:17:09.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.510 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:09.510 18:12:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:09.510 00:17:09.510 --- 10.0.0.1 ping statistics --- 00:17:09.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.510 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:09.510 18:12:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.510 18:12:07 -- nvmf/common.sh@421 -- # return 0 00:17:09.510 18:12:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:09.510 18:12:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.510 18:12:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:09.510 18:12:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:09.510 18:12:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.510 18:12:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:09.510 18:12:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:09.510 18:12:07 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:09.510 18:12:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:09.510 18:12:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:09.510 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:17:09.510 18:12:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:09.510 18:12:07 -- nvmf/common.sh@469 -- # nvmfpid=77652 00:17:09.510 18:12:07 -- nvmf/common.sh@470 -- # waitforlisten 77652 00:17:09.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.510 18:12:07 -- common/autotest_common.sh@819 -- # '[' -z 77652 ']' 00:17:09.510 18:12:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.510 18:12:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.510 18:12:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.510 18:12:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.510 18:12:07 -- common/autotest_common.sh@10 -- # set +x 00:17:09.510 [2024-04-25 18:12:07.397978] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:09.510 [2024-04-25 18:12:07.398064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.770 [2024-04-25 18:12:07.535837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.770 [2024-04-25 18:12:07.606047] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.770 [2024-04-25 18:12:07.606177] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.770 [2024-04-25 18:12:07.606189] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.770 [2024-04-25 18:12:07.606196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
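For reference, the namespace and veth plumbing traced above (nvmf_veth_init) condenses to roughly the following; names and addresses are the ones from this log, the second target interface (nvmf_tgt_if2, 10.0.0.3) follows the same pattern, and the corresponding 'ip link set ... up' calls are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk                                   # target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator <-> bridge leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target    <-> bridge leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                              # initiator -> target reachability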
00:17:09.770 [2024-04-25 18:12:07.606223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.339 18:12:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.339 18:12:08 -- common/autotest_common.sh@852 -- # return 0 00:17:10.339 18:12:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:10.339 18:12:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:10.339 18:12:08 -- common/autotest_common.sh@10 -- # set +x 00:17:10.599 18:12:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.599 18:12:08 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:10.599 18:12:08 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:10.599 18:12:08 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:10.599 18:12:08 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:10.599 18:12:08 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:10.599 18:12:08 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:10.599 18:12:08 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:10.599 18:12:08 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.858 [2024-04-25 18:12:08.550166] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.858 [2024-04-25 18:12:08.566105] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:10.858 [2024-04-25 18:12:08.566291] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.858 malloc0 00:17:10.858 18:12:08 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.858 18:12:08 -- fips/fips.sh@148 -- # bdevperf_pid=77710 00:17:10.858 18:12:08 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.858 18:12:08 -- fips/fips.sh@149 -- # waitforlisten 77710 /var/tmp/bdevperf.sock 00:17:10.858 18:12:08 -- common/autotest_common.sh@819 -- # '[' -z 77710 ']' 00:17:10.858 18:12:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.858 18:12:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:10.858 18:12:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.858 18:12:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:10.858 18:12:08 -- common/autotest_common.sh@10 -- # set +x 00:17:10.858 [2024-04-25 18:12:08.708373] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:10.858 [2024-04-25 18:12:08.708459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77710 ] 00:17:11.117 [2024-04-25 18:12:08.847878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.117 [2024-04-25 18:12:08.938911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.685 18:12:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.685 18:12:09 -- common/autotest_common.sh@852 -- # return 0 00:17:11.685 18:12:09 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:11.944 [2024-04-25 18:12:09.832516] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.202 TLSTESTn1 00:17:12.202 18:12:09 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.202 Running I/O for 10 seconds... 00:17:22.176 00:17:22.176 Latency(us) 00:17:22.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.176 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:22.176 Verification LBA range: start 0x0 length 0x2000 00:17:22.176 TLSTESTn1 : 10.01 6855.71 26.78 0.00 0.00 18645.37 2159.71 20256.58 00:17:22.176 =================================================================================================================== 00:17:22.176 Total : 6855.71 26.78 0.00 0.00 18645.37 2159.71 20256.58 00:17:22.176 0 00:17:22.176 18:12:20 -- fips/fips.sh@1 -- # cleanup 00:17:22.176 18:12:20 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:22.176 18:12:20 -- common/autotest_common.sh@796 -- # type=--id 00:17:22.176 18:12:20 -- common/autotest_common.sh@797 -- # id=0 00:17:22.176 18:12:20 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:22.176 18:12:20 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:22.176 18:12:20 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:22.176 18:12:20 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:22.176 18:12:20 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:22.176 18:12:20 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:22.176 nvmf_trace.0 00:17:22.176 18:12:20 -- common/autotest_common.sh@811 -- # return 0 00:17:22.176 18:12:20 -- fips/fips.sh@16 -- # killprocess 77710 00:17:22.176 18:12:20 -- common/autotest_common.sh@926 -- # '[' -z 77710 ']' 00:17:22.176 18:12:20 -- common/autotest_common.sh@930 -- # kill -0 77710 00:17:22.435 18:12:20 -- common/autotest_common.sh@931 -- # uname 00:17:22.435 18:12:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.435 18:12:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77710 00:17:22.435 18:12:20 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:22.435 18:12:20 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:22.435 killing process with pid 77710 00:17:22.435 18:12:20 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 77710' 00:17:22.435 18:12:20 -- common/autotest_common.sh@945 -- # kill 77710 00:17:22.435 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.435 00:17:22.435 Latency(us) 00:17:22.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.435 =================================================================================================================== 00:17:22.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.435 18:12:20 -- common/autotest_common.sh@950 -- # wait 77710 00:17:22.435 18:12:20 -- fips/fips.sh@17 -- # nvmftestfini 00:17:22.435 18:12:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:22.435 18:12:20 -- nvmf/common.sh@116 -- # sync 00:17:22.694 18:12:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:22.694 18:12:20 -- nvmf/common.sh@119 -- # set +e 00:17:22.694 18:12:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:22.694 18:12:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:22.694 rmmod nvme_tcp 00:17:22.694 rmmod nvme_fabrics 00:17:22.694 rmmod nvme_keyring 00:17:22.694 18:12:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:22.694 18:12:20 -- nvmf/common.sh@123 -- # set -e 00:17:22.694 18:12:20 -- nvmf/common.sh@124 -- # return 0 00:17:22.694 18:12:20 -- nvmf/common.sh@477 -- # '[' -n 77652 ']' 00:17:22.694 18:12:20 -- nvmf/common.sh@478 -- # killprocess 77652 00:17:22.694 18:12:20 -- common/autotest_common.sh@926 -- # '[' -z 77652 ']' 00:17:22.694 18:12:20 -- common/autotest_common.sh@930 -- # kill -0 77652 00:17:22.694 18:12:20 -- common/autotest_common.sh@931 -- # uname 00:17:22.694 18:12:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.694 18:12:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77652 00:17:22.694 18:12:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:22.694 18:12:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:22.694 18:12:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77652' 00:17:22.694 killing process with pid 77652 00:17:22.694 18:12:20 -- common/autotest_common.sh@945 -- # kill 77652 00:17:22.694 18:12:20 -- common/autotest_common.sh@950 -- # wait 77652 00:17:22.953 18:12:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:22.953 18:12:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:22.953 18:12:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:22.953 18:12:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.953 18:12:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:22.953 18:12:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.953 18:12:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.953 18:12:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.953 18:12:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:22.953 18:12:20 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:22.953 00:17:22.953 real 0m14.072s 00:17:22.953 user 0m18.791s 00:17:22.953 sys 0m5.819s 00:17:22.953 18:12:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.953 18:12:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.953 ************************************ 00:17:22.953 END TEST nvmf_fips 00:17:22.953 ************************************ 00:17:22.953 18:12:20 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:22.953 18:12:20 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:22.953 18:12:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:22.953 18:12:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:22.953 18:12:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.953 ************************************ 00:17:22.953 START TEST nvmf_fuzz 00:17:22.953 ************************************ 00:17:22.953 18:12:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:22.953 * Looking for test storage... 00:17:22.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:22.953 18:12:20 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.953 18:12:20 -- nvmf/common.sh@7 -- # uname -s 00:17:22.953 18:12:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.953 18:12:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.953 18:12:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.953 18:12:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.953 18:12:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.953 18:12:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.953 18:12:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.953 18:12:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.213 18:12:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.213 18:12:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.213 18:12:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:17:23.213 18:12:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:17:23.213 18:12:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.213 18:12:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.213 18:12:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.213 18:12:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.213 18:12:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.213 18:12:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.213 18:12:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.213 18:12:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.213 18:12:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.213 
18:12:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.213 18:12:20 -- paths/export.sh@5 -- # export PATH 00:17:23.213 18:12:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.213 18:12:20 -- nvmf/common.sh@46 -- # : 0 00:17:23.213 18:12:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:23.213 18:12:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:23.213 18:12:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:23.213 18:12:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.213 18:12:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.213 18:12:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:23.213 18:12:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:23.213 18:12:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:23.213 18:12:20 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:23.213 18:12:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:23.213 18:12:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.213 18:12:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:23.213 18:12:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:23.213 18:12:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:23.213 18:12:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.213 18:12:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.213 18:12:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.213 18:12:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:23.213 18:12:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:23.213 18:12:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:23.213 18:12:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:23.213 18:12:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:23.213 18:12:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:23.213 18:12:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.213 18:12:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.213 18:12:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:23.213 18:12:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:23.213 18:12:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.213 18:12:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.213 18:12:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.213 18:12:20 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.213 18:12:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.213 18:12:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.213 18:12:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.213 18:12:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.213 18:12:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:23.213 18:12:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:23.213 Cannot find device "nvmf_tgt_br" 00:17:23.213 18:12:20 -- nvmf/common.sh@154 -- # true 00:17:23.213 18:12:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.213 Cannot find device "nvmf_tgt_br2" 00:17:23.213 18:12:20 -- nvmf/common.sh@155 -- # true 00:17:23.213 18:12:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:23.213 18:12:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:23.213 Cannot find device "nvmf_tgt_br" 00:17:23.213 18:12:20 -- nvmf/common.sh@157 -- # true 00:17:23.213 18:12:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:23.213 Cannot find device "nvmf_tgt_br2" 00:17:23.213 18:12:20 -- nvmf/common.sh@158 -- # true 00:17:23.213 18:12:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:23.213 18:12:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:23.213 18:12:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.213 18:12:21 -- nvmf/common.sh@161 -- # true 00:17:23.213 18:12:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.213 18:12:21 -- nvmf/common.sh@162 -- # true 00:17:23.213 18:12:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.213 18:12:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.213 18:12:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.213 18:12:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.213 18:12:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.213 18:12:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.213 18:12:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.213 18:12:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:23.213 18:12:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:23.213 18:12:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:23.213 18:12:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:23.213 18:12:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:23.213 18:12:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:23.213 18:12:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.213 18:12:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.472 18:12:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.472 18:12:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:17:23.472 18:12:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:23.472 18:12:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.472 18:12:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.472 18:12:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.472 18:12:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.472 18:12:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.472 18:12:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:23.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:23.472 00:17:23.472 --- 10.0.0.2 ping statistics --- 00:17:23.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.472 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:23.472 18:12:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:23.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:23.472 00:17:23.472 --- 10.0.0.3 ping statistics --- 00:17:23.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.472 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:23.472 18:12:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:23.472 00:17:23.472 --- 10.0.0.1 ping statistics --- 00:17:23.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.472 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:23.472 18:12:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.472 18:12:21 -- nvmf/common.sh@421 -- # return 0 00:17:23.472 18:12:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:23.472 18:12:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.473 18:12:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:23.473 18:12:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:23.473 18:12:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.473 18:12:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:23.473 18:12:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:23.473 18:12:21 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78045 00:17:23.473 18:12:21 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:23.473 18:12:21 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:23.473 18:12:21 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78045 00:17:23.473 18:12:21 -- common/autotest_common.sh@819 -- # '[' -z 78045 ']' 00:17:23.473 18:12:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.473 18:12:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:23.473 18:12:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
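Once the target has come up, the fuzz test that follows only has to create a transport, a malloc-backed subsystem and a listener before aiming nvme_fuzz at it; condensed from the trace below (rpc.py here stands for scripts/rpc.py talking to the default /var/tmp/spdk.sock), the sequence is roughly:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # timed, seeded random pass; the second run below replays the example JSON corpus with -j
  nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a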
00:17:23.473 18:12:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:23.473 18:12:21 -- common/autotest_common.sh@10 -- # set +x 00:17:24.436 18:12:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:24.436 18:12:22 -- common/autotest_common.sh@852 -- # return 0 00:17:24.436 18:12:22 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:24.436 18:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.436 18:12:22 -- common/autotest_common.sh@10 -- # set +x 00:17:24.436 18:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.436 18:12:22 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:24.436 18:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.436 18:12:22 -- common/autotest_common.sh@10 -- # set +x 00:17:24.436 Malloc0 00:17:24.436 18:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.436 18:12:22 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:24.436 18:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.436 18:12:22 -- common/autotest_common.sh@10 -- # set +x 00:17:24.436 18:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.436 18:12:22 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:24.436 18:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.436 18:12:22 -- common/autotest_common.sh@10 -- # set +x 00:17:24.436 18:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.436 18:12:22 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.436 18:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.436 18:12:22 -- common/autotest_common.sh@10 -- # set +x 00:17:24.695 18:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.695 18:12:22 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:24.695 18:12:22 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:24.954 Shutting down the fuzz application 00:17:24.954 18:12:22 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:25.212 Shutting down the fuzz application 00:17:25.212 18:12:23 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.212 18:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:25.212 18:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:25.212 18:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:25.212 18:12:23 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:25.212 18:12:23 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:25.212 18:12:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:25.212 18:12:23 -- nvmf/common.sh@116 -- # sync 00:17:25.212 18:12:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:25.212 18:12:23 -- nvmf/common.sh@119 -- # set +e 00:17:25.212 18:12:23 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:25.212 18:12:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:25.212 rmmod nvme_tcp 00:17:25.212 rmmod nvme_fabrics 00:17:25.212 rmmod nvme_keyring 00:17:25.212 18:12:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:25.212 18:12:23 -- nvmf/common.sh@123 -- # set -e 00:17:25.212 18:12:23 -- nvmf/common.sh@124 -- # return 0 00:17:25.212 18:12:23 -- nvmf/common.sh@477 -- # '[' -n 78045 ']' 00:17:25.213 18:12:23 -- nvmf/common.sh@478 -- # killprocess 78045 00:17:25.213 18:12:23 -- common/autotest_common.sh@926 -- # '[' -z 78045 ']' 00:17:25.213 18:12:23 -- common/autotest_common.sh@930 -- # kill -0 78045 00:17:25.213 18:12:23 -- common/autotest_common.sh@931 -- # uname 00:17:25.213 18:12:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.213 18:12:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78045 00:17:25.472 killing process with pid 78045 00:17:25.472 18:12:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:25.472 18:12:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:25.472 18:12:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78045' 00:17:25.472 18:12:23 -- common/autotest_common.sh@945 -- # kill 78045 00:17:25.472 18:12:23 -- common/autotest_common.sh@950 -- # wait 78045 00:17:25.472 18:12:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:25.472 18:12:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:25.472 18:12:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:25.472 18:12:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.472 18:12:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:25.472 18:12:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.472 18:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.472 18:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.731 18:12:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:25.731 18:12:23 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:25.731 00:17:25.731 real 0m2.633s 00:17:25.731 user 0m2.786s 00:17:25.731 sys 0m0.625s 00:17:25.731 18:12:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.731 ************************************ 00:17:25.731 END TEST nvmf_fuzz 00:17:25.731 ************************************ 00:17:25.731 18:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:25.731 18:12:23 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:25.731 18:12:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:25.731 18:12:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:25.731 18:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:25.731 ************************************ 00:17:25.731 START TEST nvmf_multiconnection 00:17:25.731 ************************************ 00:17:25.731 18:12:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:25.731 * Looking for test storage... 
00:17:25.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:25.731 18:12:23 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.731 18:12:23 -- nvmf/common.sh@7 -- # uname -s 00:17:25.731 18:12:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.731 18:12:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.731 18:12:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.731 18:12:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.731 18:12:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.731 18:12:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.731 18:12:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.731 18:12:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.731 18:12:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.731 18:12:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.731 18:12:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:17:25.731 18:12:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:17:25.731 18:12:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.731 18:12:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.731 18:12:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.731 18:12:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.731 18:12:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.731 18:12:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.731 18:12:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.731 18:12:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.731 18:12:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.731 18:12:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.731 18:12:23 -- 
paths/export.sh@5 -- # export PATH 00:17:25.731 18:12:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.731 18:12:23 -- nvmf/common.sh@46 -- # : 0 00:17:25.731 18:12:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:25.731 18:12:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:25.731 18:12:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:25.731 18:12:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.731 18:12:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.731 18:12:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:25.731 18:12:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:25.731 18:12:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:25.731 18:12:23 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.731 18:12:23 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.731 18:12:23 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:25.731 18:12:23 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:25.731 18:12:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:25.731 18:12:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.731 18:12:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:25.731 18:12:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:25.731 18:12:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:25.731 18:12:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.731 18:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.731 18:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.731 18:12:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:25.731 18:12:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:25.731 18:12:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:25.731 18:12:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:25.731 18:12:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:25.731 18:12:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:25.731 18:12:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.731 18:12:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.731 18:12:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:25.731 18:12:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:25.731 18:12:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:25.732 18:12:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:25.732 18:12:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:25.732 18:12:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.732 18:12:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:25.732 18:12:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:25.732 18:12:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:25.732 18:12:23 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:25.732 18:12:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:25.732 18:12:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:25.732 Cannot find device "nvmf_tgt_br" 00:17:25.732 18:12:23 -- nvmf/common.sh@154 -- # true 00:17:25.732 18:12:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:25.732 Cannot find device "nvmf_tgt_br2" 00:17:25.732 18:12:23 -- nvmf/common.sh@155 -- # true 00:17:25.732 18:12:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:25.732 18:12:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:25.732 Cannot find device "nvmf_tgt_br" 00:17:25.732 18:12:23 -- nvmf/common.sh@157 -- # true 00:17:25.732 18:12:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:25.991 Cannot find device "nvmf_tgt_br2" 00:17:25.991 18:12:23 -- nvmf/common.sh@158 -- # true 00:17:25.991 18:12:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:25.991 18:12:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:25.991 18:12:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.991 18:12:23 -- nvmf/common.sh@161 -- # true 00:17:25.991 18:12:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.991 18:12:23 -- nvmf/common.sh@162 -- # true 00:17:25.991 18:12:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.991 18:12:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.991 18:12:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:25.991 18:12:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:25.991 18:12:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:25.991 18:12:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:25.991 18:12:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:25.991 18:12:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:25.991 18:12:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:25.991 18:12:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:25.991 18:12:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:25.991 18:12:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:25.991 18:12:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:25.991 18:12:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.991 18:12:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:25.991 18:12:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:25.991 18:12:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:25.991 18:12:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:25.991 18:12:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:25.991 18:12:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:25.991 18:12:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:25.991 
18:12:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:26.250 18:12:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:26.250 18:12:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:26.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:26.250 00:17:26.250 --- 10.0.0.2 ping statistics --- 00:17:26.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.250 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:26.250 18:12:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:26.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:26.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:26.250 00:17:26.250 --- 10.0.0.3 ping statistics --- 00:17:26.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.250 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:26.250 18:12:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:26.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:26.250 00:17:26.250 --- 10.0.0.1 ping statistics --- 00:17:26.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.250 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:26.250 18:12:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.250 18:12:23 -- nvmf/common.sh@421 -- # return 0 00:17:26.250 18:12:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:26.250 18:12:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.250 18:12:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:26.250 18:12:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:26.250 18:12:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.250 18:12:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:26.250 18:12:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:26.250 18:12:23 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:26.250 18:12:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:26.250 18:12:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:26.250 18:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.250 18:12:23 -- nvmf/common.sh@469 -- # nvmfpid=78251 00:17:26.250 18:12:23 -- nvmf/common.sh@470 -- # waitforlisten 78251 00:17:26.250 18:12:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.250 18:12:23 -- common/autotest_common.sh@819 -- # '[' -z 78251 ']' 00:17:26.250 18:12:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.250 18:12:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:26.250 18:12:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.250 18:12:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:26.250 18:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.250 [2024-04-25 18:12:24.031336] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
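[Editor's note] The nvmf_veth_init trace above is dense; the following is a condensed recap of the topology it builds, assembled only from the commands already visible in the trace (namespace, interface, and address names are exactly the ones the test uses). It is an illustrative sketch of the constructive steps, not an independent setup recipe, and it omits the best-effort cleanup commands that the helper runs first.

  # isolated namespace for the target plus three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator address on the host, two target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring all links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open the NVMe/TCP port, allow bridge forwarding, and verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  # initiator-side NVMe/TCP module, then the target runs inside the namespace
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF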
00:17:26.250 [2024-04-25 18:12:24.031425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.250 [2024-04-25 18:12:24.173853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.509 [2024-04-25 18:12:24.283405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:26.509 [2024-04-25 18:12:24.283728] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.509 [2024-04-25 18:12:24.283908] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.509 [2024-04-25 18:12:24.284070] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.509 [2024-04-25 18:12:24.284324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.509 [2024-04-25 18:12:24.284473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.509 [2024-04-25 18:12:24.284473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.509 [2024-04-25 18:12:24.284403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.076 18:12:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:27.076 18:12:24 -- common/autotest_common.sh@852 -- # return 0 00:17:27.076 18:12:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:27.076 18:12:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:27.076 18:12:24 -- common/autotest_common.sh@10 -- # set +x 00:17:27.335 18:12:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.336 18:12:25 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 [2024-04-25 18:12:25.047896] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:27.336 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.336 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 Malloc1 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 [2024-04-25 18:12:25.135989] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.336 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 Malloc2 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.336 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 Malloc3 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.336 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:27.336 
18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 Malloc4 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.336 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.336 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:27.336 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.336 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.596 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 Malloc5 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.596 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 Malloc6 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.596 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 Malloc7 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.596 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 Malloc8 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 
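[Editor's note] The loop traced above and below repeats the same four-RPC pattern for each of the 11 subsystems, after a one-time transport creation. rpc_cmd is a thin test wrapper around the SPDK JSON-RPC client; a sketch of one iteration using the standalone scripts/rpc.py client (the wrapper/path is an assumption, the method names and flags are copied verbatim from the trace) would look roughly like:

  # one-time: create the TCP transport with the flags the test passes
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # per subsystem i = 1..11: backing malloc bdev, subsystem, namespace, listener
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Malloc1/cnode1/SPDK1 stand in for iteration 1; the log shows the identical sequence through Malloc11/cnode11/SPDK11, all listening on 10.0.0.2:4420 inside the target namespace.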
00:17:27.596 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.596 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.596 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:27.596 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.596 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.859 Malloc9 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.860 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 Malloc10 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.860 18:12:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 Malloc11 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:27.860 18:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.860 18:12:25 -- common/autotest_common.sh@10 -- # set +x 00:17:27.860 18:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.860 18:12:25 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:27.860 18:12:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:27.860 18:12:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.121 18:12:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:28.121 18:12:25 -- common/autotest_common.sh@1177 -- # local i=0 00:17:28.121 18:12:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.121 18:12:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:28.121 18:12:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:30.021 18:12:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:30.021 18:12:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:30.021 18:12:27 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:17:30.021 18:12:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:30.021 18:12:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.021 18:12:27 -- common/autotest_common.sh@1187 -- # return 0 00:17:30.021 18:12:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.021 18:12:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:30.280 18:12:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:30.280 18:12:28 -- common/autotest_common.sh@1177 -- # local i=0 00:17:30.280 18:12:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.280 18:12:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:30.280 18:12:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:32.207 18:12:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:32.207 18:12:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:32.207 18:12:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:17:32.207 18:12:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:32.207 18:12:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.207 18:12:30 -- common/autotest_common.sh@1187 -- # return 0 00:17:32.207 18:12:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:17:32.207 18:12:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:32.466 18:12:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:32.466 18:12:30 -- common/autotest_common.sh@1177 -- # local i=0 00:17:32.466 18:12:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.466 18:12:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:32.466 18:12:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:34.367 18:12:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:34.367 18:12:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:34.367 18:12:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:17:34.367 18:12:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:34.367 18:12:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.367 18:12:32 -- common/autotest_common.sh@1187 -- # return 0 00:17:34.367 18:12:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:34.367 18:12:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:17:34.625 18:12:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:34.625 18:12:32 -- common/autotest_common.sh@1177 -- # local i=0 00:17:34.625 18:12:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.625 18:12:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:34.625 18:12:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:36.550 18:12:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:36.550 18:12:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:36.550 18:12:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:17:36.550 18:12:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:36.550 18:12:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.550 18:12:34 -- common/autotest_common.sh@1187 -- # return 0 00:17:36.550 18:12:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:36.550 18:12:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:17:36.809 18:12:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:17:36.809 18:12:34 -- common/autotest_common.sh@1177 -- # local i=0 00:17:36.809 18:12:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:36.809 18:12:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:36.809 18:12:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:38.712 18:12:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:38.712 18:12:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:38.712 18:12:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:17:38.712 18:12:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:38.712 18:12:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.712 18:12:36 
-- common/autotest_common.sh@1187 -- # return 0 00:17:38.712 18:12:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:38.712 18:12:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:17:38.971 18:12:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:17:38.971 18:12:36 -- common/autotest_common.sh@1177 -- # local i=0 00:17:38.971 18:12:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:38.971 18:12:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:38.971 18:12:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:41.506 18:12:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:41.506 18:12:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:41.506 18:12:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:17:41.506 18:12:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:41.506 18:12:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.506 18:12:38 -- common/autotest_common.sh@1187 -- # return 0 00:17:41.506 18:12:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:41.506 18:12:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:17:41.506 18:12:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:17:41.506 18:12:39 -- common/autotest_common.sh@1177 -- # local i=0 00:17:41.506 18:12:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.506 18:12:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:41.506 18:12:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:43.410 18:12:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:43.410 18:12:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:43.410 18:12:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:17:43.410 18:12:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:43.410 18:12:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.410 18:12:41 -- common/autotest_common.sh@1187 -- # return 0 00:17:43.410 18:12:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:43.410 18:12:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:17:43.410 18:12:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:17:43.410 18:12:41 -- common/autotest_common.sh@1177 -- # local i=0 00:17:43.410 18:12:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.410 18:12:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:43.410 18:12:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:45.315 18:12:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:45.315 18:12:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:45.315 18:12:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:17:45.315 18:12:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
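[Editor's note] On the initiator side, each iteration of the connect loop pairs an nvme connect with the waitforserial helper. A condensed sketch of the iteration that just completed above (SPDK8), using only commands visible in the trace; the while-loop is a rough approximation of waitforserial, which retries up to ~15 times with a 2-second sleep:

  # connect the host to subsystem 8 over TCP
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 \
               --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 \
               -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
  # wait until exactly one block device with serial SPDK8 is visible
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK8)" -ge 1 ]; do sleep 2; done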
00:17:45.315 18:12:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.315 18:12:43 -- common/autotest_common.sh@1187 -- # return 0 00:17:45.315 18:12:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.316 18:12:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:17:45.574 18:12:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:17:45.574 18:12:43 -- common/autotest_common.sh@1177 -- # local i=0 00:17:45.574 18:12:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.574 18:12:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:45.574 18:12:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:48.166 18:12:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:48.167 18:12:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:48.167 18:12:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:17:48.167 18:12:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:48.167 18:12:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.167 18:12:45 -- common/autotest_common.sh@1187 -- # return 0 00:17:48.167 18:12:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:48.167 18:12:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:17:48.167 18:12:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:17:48.167 18:12:45 -- common/autotest_common.sh@1177 -- # local i=0 00:17:48.167 18:12:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.167 18:12:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:48.167 18:12:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:50.070 18:12:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:50.070 18:12:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:50.070 18:12:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:17:50.070 18:12:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:50.070 18:12:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.070 18:12:47 -- common/autotest_common.sh@1187 -- # return 0 00:17:50.070 18:12:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:50.070 18:12:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:17:50.070 18:12:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:17:50.070 18:12:47 -- common/autotest_common.sh@1177 -- # local i=0 00:17:50.070 18:12:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.070 18:12:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:50.070 18:12:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:51.974 18:12:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:51.974 18:12:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:51.974 18:12:49 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:17:51.974 18:12:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:51.974 18:12:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.974 18:12:49 -- common/autotest_common.sh@1187 -- # return 0 00:17:51.974 18:12:49 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:17:51.974 [global] 00:17:51.974 thread=1 00:17:51.974 invalidate=1 00:17:51.974 rw=read 00:17:51.974 time_based=1 00:17:51.974 runtime=10 00:17:51.974 ioengine=libaio 00:17:51.974 direct=1 00:17:51.974 bs=262144 00:17:51.974 iodepth=64 00:17:51.974 norandommap=1 00:17:51.974 numjobs=1 00:17:51.974 00:17:51.974 [job0] 00:17:51.974 filename=/dev/nvme0n1 00:17:51.974 [job1] 00:17:51.974 filename=/dev/nvme10n1 00:17:51.974 [job2] 00:17:51.974 filename=/dev/nvme1n1 00:17:51.974 [job3] 00:17:51.974 filename=/dev/nvme2n1 00:17:51.974 [job4] 00:17:51.974 filename=/dev/nvme3n1 00:17:51.974 [job5] 00:17:51.974 filename=/dev/nvme4n1 00:17:51.974 [job6] 00:17:51.974 filename=/dev/nvme5n1 00:17:51.974 [job7] 00:17:51.974 filename=/dev/nvme6n1 00:17:51.974 [job8] 00:17:51.974 filename=/dev/nvme7n1 00:17:52.232 [job9] 00:17:52.232 filename=/dev/nvme8n1 00:17:52.232 [job10] 00:17:52.232 filename=/dev/nvme9n1 00:17:52.232 Could not set queue depth (nvme0n1) 00:17:52.232 Could not set queue depth (nvme10n1) 00:17:52.232 Could not set queue depth (nvme1n1) 00:17:52.232 Could not set queue depth (nvme2n1) 00:17:52.232 Could not set queue depth (nvme3n1) 00:17:52.232 Could not set queue depth (nvme4n1) 00:17:52.232 Could not set queue depth (nvme5n1) 00:17:52.232 Could not set queue depth (nvme6n1) 00:17:52.232 Could not set queue depth (nvme7n1) 00:17:52.232 Could not set queue depth (nvme8n1) 00:17:52.232 Could not set queue depth (nvme9n1) 00:17:52.491 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:52.491 fio-3.35 00:17:52.491 Starting 11 threads 00:18:04.693 00:18:04.693 job0: (groupid=0, jobs=1): err= 0: pid=78726: Thu Apr 25 18:13:00 2024 00:18:04.693 read: IOPS=495, BW=124MiB/s (130MB/s)(1251MiB/10098msec) 00:18:04.693 slat (usec): min=16, max=99275, avg=1999.02, stdev=7970.99 
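[Editor's note] A quick arithmetic cross-check on the fio figures above and below, using only values already printed in the log: the job file sets bs=262144 (256 KiB), so reported IOPS and bandwidth should agree. For job0, 495 IOPS x 256 KiB/op = 126,720 KiB/s, i.e. about 124 MiB/s, matching the BW=124MiB/s line. The same check applies to each of the per-job summaries that follow.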
00:18:04.693 clat (msec): min=54, max=245, avg=126.98, stdev=18.84 00:18:04.693 lat (msec): min=54, max=245, avg=128.98, stdev=20.02 00:18:04.693 clat percentiles (msec): 00:18:04.693 | 1.00th=[ 92], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 114], 00:18:04.693 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 128], 00:18:04.693 | 70.00th=[ 132], 80.00th=[ 138], 90.00th=[ 148], 95.00th=[ 155], 00:18:04.693 | 99.00th=[ 203], 99.50th=[ 213], 99.90th=[ 245], 99.95th=[ 245], 00:18:04.693 | 99.99th=[ 245] 00:18:04.693 bw ( KiB/s): min=78690, max=141824, per=10.91%, avg=126353.15, stdev=13446.84, samples=20 00:18:04.693 iops : min= 307, max= 554, avg=493.45, stdev=52.55, samples=20 00:18:04.693 lat (msec) : 100=2.96%, 250=97.04% 00:18:04.693 cpu : usr=0.22%, sys=1.90%, ctx=787, majf=0, minf=4097 00:18:04.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:04.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.693 issued rwts: total=5002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.693 job1: (groupid=0, jobs=1): err= 0: pid=78728: Thu Apr 25 18:13:00 2024 00:18:04.693 read: IOPS=467, BW=117MiB/s (123MB/s)(1181MiB/10102msec) 00:18:04.693 slat (usec): min=21, max=66755, avg=2120.09, stdev=7836.93 00:18:04.693 clat (msec): min=14, max=198, avg=134.61, stdev=17.25 00:18:04.693 lat (msec): min=15, max=219, avg=136.73, stdev=18.58 00:18:04.693 clat percentiles (msec): 00:18:04.693 | 1.00th=[ 94], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 122], 00:18:04.693 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 136], 60.00th=[ 140], 00:18:04.693 | 70.00th=[ 144], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 161], 00:18:04.693 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 190], 99.95th=[ 199], 00:18:04.693 | 99.99th=[ 199] 00:18:04.693 bw ( KiB/s): min=103217, max=127488, per=10.29%, avg=119175.75, stdev=6812.86, samples=20 00:18:04.693 iops : min= 403, max= 498, avg=465.50, stdev=26.65, samples=20 00:18:04.693 lat (msec) : 20=0.13%, 50=0.13%, 100=0.91%, 250=98.84% 00:18:04.693 cpu : usr=0.21%, sys=1.85%, ctx=457, majf=0, minf=4097 00:18:04.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:04.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.693 issued rwts: total=4723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.693 job2: (groupid=0, jobs=1): err= 0: pid=78734: Thu Apr 25 18:13:00 2024 00:18:04.693 read: IOPS=328, BW=82.1MiB/s (86.1MB/s)(833MiB/10149msec) 00:18:04.693 slat (usec): min=21, max=174288, avg=3009.08, stdev=14491.61 00:18:04.693 clat (msec): min=48, max=341, avg=191.67, stdev=18.49 00:18:04.693 lat (msec): min=48, max=385, avg=194.68, stdev=23.12 00:18:04.693 clat percentiles (msec): 00:18:04.693 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:18:04.693 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:18:04.693 | 70.00th=[ 199], 80.00th=[ 203], 90.00th=[ 209], 95.00th=[ 218], 00:18:04.693 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 342], 00:18:04.693 | 99.99th=[ 342] 00:18:04.693 bw ( KiB/s): min=64512, max=96768, per=7.22%, avg=83642.70, stdev=9916.86, samples=20 00:18:04.693 iops : min= 252, max= 378, 
avg=326.60, stdev=38.75, samples=20 00:18:04.693 lat (msec) : 50=0.15%, 250=98.59%, 500=1.26% 00:18:04.693 cpu : usr=0.13%, sys=1.25%, ctx=573, majf=0, minf=4097 00:18:04.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:04.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.693 issued rwts: total=3332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.693 job3: (groupid=0, jobs=1): err= 0: pid=78736: Thu Apr 25 18:13:00 2024 00:18:04.693 read: IOPS=327, BW=81.8MiB/s (85.8MB/s)(831MiB/10154msec) 00:18:04.693 slat (usec): min=21, max=138780, avg=3008.33, stdev=12350.14 00:18:04.693 clat (msec): min=22, max=342, avg=192.16, stdev=23.99 00:18:04.693 lat (msec): min=24, max=342, avg=195.17, stdev=26.92 00:18:04.693 clat percentiles (msec): 00:18:04.693 | 1.00th=[ 105], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 180], 00:18:04.693 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 197], 00:18:04.693 | 70.00th=[ 201], 80.00th=[ 207], 90.00th=[ 215], 95.00th=[ 222], 00:18:04.693 | 99.00th=[ 245], 99.50th=[ 309], 99.90th=[ 342], 99.95th=[ 342], 00:18:04.693 | 99.99th=[ 342] 00:18:04.693 bw ( KiB/s): min=64512, max=97280, per=7.20%, avg=83397.15, stdev=10248.13, samples=20 00:18:04.693 iops : min= 252, max= 380, avg=325.65, stdev=40.05, samples=20 00:18:04.693 lat (msec) : 50=0.33%, 100=0.21%, 250=98.50%, 500=0.96% 00:18:04.693 cpu : usr=0.27%, sys=1.08%, ctx=633, majf=0, minf=4097 00:18:04.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:04.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.693 issued rwts: total=3324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.693 job4: (groupid=0, jobs=1): err= 0: pid=78737: Thu Apr 25 18:13:00 2024 00:18:04.693 read: IOPS=326, BW=81.6MiB/s (85.6MB/s)(829MiB/10156msec) 00:18:04.693 slat (usec): min=16, max=131490, avg=3021.37, stdev=12640.14 00:18:04.693 clat (msec): min=22, max=310, avg=192.73, stdev=27.24 00:18:04.693 lat (msec): min=22, max=350, avg=195.75, stdev=29.97 00:18:04.693 clat percentiles (msec): 00:18:04.693 | 1.00th=[ 54], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 180], 00:18:04.693 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 199], 00:18:04.693 | 70.00th=[ 203], 80.00th=[ 207], 90.00th=[ 215], 95.00th=[ 224], 00:18:04.693 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 309], 00:18:04.693 | 99.99th=[ 309] 00:18:04.693 bw ( KiB/s): min=67072, max=96768, per=7.18%, avg=83156.55, stdev=8261.07, samples=20 00:18:04.693 iops : min= 262, max= 378, avg=324.70, stdev=32.23, samples=20 00:18:04.693 lat (msec) : 50=0.54%, 100=1.51%, 250=96.68%, 500=1.27% 00:18:04.693 cpu : usr=0.12%, sys=1.29%, ctx=598, majf=0, minf=4097 00:18:04.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:04.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.693 issued rwts: total=3315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.693 job5: (groupid=0, jobs=1): err= 0: pid=78738: Thu Apr 25 
18:13:00 2024 00:18:04.693 read: IOPS=318, BW=79.6MiB/s (83.4MB/s)(808MiB/10155msec) 00:18:04.693 slat (usec): min=18, max=123354, avg=3089.07, stdev=11517.96 00:18:04.693 clat (msec): min=21, max=344, avg=197.64, stdev=28.72 00:18:04.693 lat (msec): min=21, max=344, avg=200.73, stdev=30.72 00:18:04.693 clat percentiles (msec): 00:18:04.693 | 1.00th=[ 81], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:18:04.693 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 199], 60.00th=[ 203], 00:18:04.693 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 220], 95.00th=[ 230], 00:18:04.693 | 99.00th=[ 284], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:18:04.693 | 99.99th=[ 347] 00:18:04.693 bw ( KiB/s): min=73580, max=93184, per=7.00%, avg=81069.55, stdev=6887.95, samples=20 00:18:04.693 iops : min= 287, max= 364, avg=316.50, stdev=26.84, samples=20 00:18:04.693 lat (msec) : 50=0.46%, 100=1.67%, 250=95.54%, 500=2.32% 00:18:04.693 cpu : usr=0.14%, sys=1.26%, ctx=593, majf=0, minf=4097 00:18:04.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:04.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.693 issued rwts: total=3232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.693 job6: (groupid=0, jobs=1): err= 0: pid=78739: Thu Apr 25 18:13:00 2024 00:18:04.693 read: IOPS=315, BW=78.8MiB/s (82.6MB/s)(800MiB/10152msec) 00:18:04.693 slat (usec): min=21, max=128984, avg=3126.19, stdev=11903.52 00:18:04.693 clat (msec): min=68, max=363, avg=199.57, stdev=23.49 00:18:04.693 lat (msec): min=68, max=363, avg=202.69, stdev=26.04 00:18:04.693 clat percentiles (msec): 00:18:04.693 | 1.00th=[ 133], 5.00th=[ 171], 10.00th=[ 180], 20.00th=[ 186], 00:18:04.693 | 30.00th=[ 192], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 203], 00:18:04.693 | 70.00th=[ 207], 80.00th=[ 213], 90.00th=[ 222], 95.00th=[ 228], 00:18:04.693 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 347], 99.95th=[ 363], 00:18:04.693 | 99.99th=[ 363] 00:18:04.693 bw ( KiB/s): min=63361, max=94720, per=6.93%, avg=80253.60, stdev=7222.80, samples=20 00:18:04.693 iops : min= 247, max= 370, avg=313.35, stdev=28.33, samples=20 00:18:04.693 lat (msec) : 100=0.41%, 250=97.53%, 500=2.06% 00:18:04.693 cpu : usr=0.11%, sys=1.29%, ctx=503, majf=0, minf=4097 00:18:04.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:04.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.694 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.694 job7: (groupid=0, jobs=1): err= 0: pid=78740: Thu Apr 25 18:13:00 2024 00:18:04.694 read: IOPS=309, BW=77.5MiB/s (81.2MB/s)(786MiB/10149msec) 00:18:04.694 slat (usec): min=15, max=139657, avg=3120.83, stdev=11978.69 00:18:04.694 clat (msec): min=46, max=352, avg=203.09, stdev=20.78 00:18:04.694 lat (msec): min=48, max=353, avg=206.21, stdev=23.64 00:18:04.694 clat percentiles (msec): 00:18:04.694 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:18:04.694 | 30.00th=[ 194], 40.00th=[ 199], 50.00th=[ 203], 60.00th=[ 207], 00:18:04.694 | 70.00th=[ 213], 80.00th=[ 218], 90.00th=[ 224], 95.00th=[ 232], 00:18:04.694 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 355], 99.95th=[ 355], 
00:18:04.694 | 99.99th=[ 355] 00:18:04.694 bw ( KiB/s): min=66048, max=89600, per=6.81%, avg=78859.20, stdev=6131.81, samples=20 00:18:04.694 iops : min= 258, max= 350, avg=307.90, stdev=23.90, samples=20 00:18:04.694 lat (msec) : 50=0.16%, 250=98.00%, 500=1.84% 00:18:04.694 cpu : usr=0.15%, sys=1.06%, ctx=682, majf=0, minf=4097 00:18:04.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:04.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.694 issued rwts: total=3145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.694 job8: (groupid=0, jobs=1): err= 0: pid=78741: Thu Apr 25 18:13:00 2024 00:18:04.694 read: IOPS=483, BW=121MiB/s (127MB/s)(1221MiB/10100msec) 00:18:04.694 slat (usec): min=21, max=71158, avg=2042.84, stdev=7554.15 00:18:04.694 clat (msec): min=23, max=226, avg=130.07, stdev=16.25 00:18:04.694 lat (msec): min=23, max=226, avg=132.11, stdev=17.52 00:18:04.694 clat percentiles (msec): 00:18:04.694 | 1.00th=[ 96], 5.00th=[ 107], 10.00th=[ 113], 20.00th=[ 120], 00:18:04.694 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 134], 00:18:04.694 | 70.00th=[ 138], 80.00th=[ 142], 90.00th=[ 148], 95.00th=[ 155], 00:18:04.694 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 226], 99.95th=[ 228], 00:18:04.694 | 99.99th=[ 228] 00:18:04.694 bw ( KiB/s): min=105561, max=142051, per=10.65%, avg=123331.20, stdev=8603.53, samples=20 00:18:04.694 iops : min= 412, max= 554, avg=481.60, stdev=33.54, samples=20 00:18:04.694 lat (msec) : 50=0.14%, 100=1.76%, 250=98.10% 00:18:04.694 cpu : usr=0.24%, sys=1.71%, ctx=877, majf=0, minf=4097 00:18:04.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:04.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.694 issued rwts: total=4884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.694 job9: (groupid=0, jobs=1): err= 0: pid=78742: Thu Apr 25 18:13:00 2024 00:18:04.694 read: IOPS=689, BW=172MiB/s (181MB/s)(1751MiB/10156msec) 00:18:04.694 slat (usec): min=16, max=203901, avg=1423.80, stdev=8240.55 00:18:04.694 clat (msec): min=11, max=344, avg=91.20, stdev=86.58 00:18:04.694 lat (msec): min=11, max=390, avg=92.63, stdev=88.22 00:18:04.694 clat percentiles (msec): 00:18:04.694 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 26], 00:18:04.694 | 30.00th=[ 28], 40.00th=[ 30], 50.00th=[ 35], 60.00th=[ 39], 00:18:04.694 | 70.00th=[ 190], 80.00th=[ 205], 90.00th=[ 218], 95.00th=[ 226], 00:18:04.694 | 99.00th=[ 241], 99.50th=[ 264], 99.90th=[ 326], 99.95th=[ 347], 00:18:04.694 | 99.99th=[ 347] 00:18:04.694 bw ( KiB/s): min=68096, max=568605, per=15.32%, avg=177470.75, stdev=191473.43, samples=20 00:18:04.694 iops : min= 266, max= 2221, avg=693.10, stdev=747.90, samples=20 00:18:04.694 lat (msec) : 20=2.54%, 50=62.90%, 100=0.01%, 250=33.89%, 500=0.66% 00:18:04.694 cpu : usr=0.19%, sys=2.13%, ctx=1312, majf=0, minf=4097 00:18:04.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:04.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.694 issued rwts: total=7005,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:04.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.694 job10: (groupid=0, jobs=1): err= 0: pid=78743: Thu Apr 25 18:13:00 2024 00:18:04.694 read: IOPS=474, BW=119MiB/s (124MB/s)(1197MiB/10096msec) 00:18:04.694 slat (usec): min=17, max=80536, avg=2088.61, stdev=7386.39 00:18:04.694 clat (msec): min=59, max=200, avg=132.67, stdev=16.66 00:18:04.694 lat (msec): min=59, max=208, avg=134.76, stdev=17.81 00:18:04.694 clat percentiles (msec): 00:18:04.694 | 1.00th=[ 88], 5.00th=[ 108], 10.00th=[ 114], 20.00th=[ 120], 00:18:04.694 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 132], 60.00th=[ 138], 00:18:04.694 | 70.00th=[ 140], 80.00th=[ 146], 90.00th=[ 153], 95.00th=[ 159], 00:18:04.694 | 99.00th=[ 182], 99.50th=[ 182], 99.90th=[ 186], 99.95th=[ 201], 00:18:04.694 | 99.99th=[ 201] 00:18:04.694 bw ( KiB/s): min=108761, max=131072, per=10.44%, avg=120897.45, stdev=6175.75, samples=20 00:18:04.694 iops : min= 424, max= 512, avg=472.15, stdev=24.23, samples=20 00:18:04.694 lat (msec) : 100=2.30%, 250=97.70% 00:18:04.694 cpu : usr=0.22%, sys=1.81%, ctx=751, majf=0, minf=4097 00:18:04.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:04.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:04.694 issued rwts: total=4788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.694 00:18:04.694 Run status group 0 (all jobs): 00:18:04.694 READ: bw=1131MiB/s (1186MB/s), 77.5MiB/s-172MiB/s (81.2MB/s-181MB/s), io=11.2GiB (12.0GB), run=10096-10156msec 00:18:04.694 00:18:04.694 Disk stats (read/write): 00:18:04.694 nvme0n1: ios=9879/0, merge=0/0, ticks=1239192/0, in_queue=1239192, util=97.61% 00:18:04.694 nvme10n1: ios=9319/0, merge=0/0, ticks=1241014/0, in_queue=1241014, util=98.02% 00:18:04.694 nvme1n1: ios=6539/0, merge=0/0, ticks=1238210/0, in_queue=1238210, util=98.07% 00:18:04.694 nvme2n1: ios=6520/0, merge=0/0, ticks=1236252/0, in_queue=1236252, util=98.00% 00:18:04.694 nvme3n1: ios=6504/0, merge=0/0, ticks=1240772/0, in_queue=1240772, util=98.28% 00:18:04.694 nvme4n1: ios=6372/0, merge=0/0, ticks=1240498/0, in_queue=1240498, util=98.41% 00:18:04.694 nvme5n1: ios=6278/0, merge=0/0, ticks=1236672/0, in_queue=1236672, util=98.47% 00:18:04.694 nvme6n1: ios=6163/0, merge=0/0, ticks=1234726/0, in_queue=1234726, util=98.40% 00:18:04.694 nvme7n1: ios=9685/0, merge=0/0, ticks=1242827/0, in_queue=1242827, util=98.75% 00:18:04.694 nvme8n1: ios=13883/0, merge=0/0, ticks=1226238/0, in_queue=1226238, util=98.95% 00:18:04.694 nvme9n1: ios=9448/0, merge=0/0, ticks=1240799/0, in_queue=1240799, util=98.92% 00:18:04.694 18:13:00 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:04.694 [global] 00:18:04.694 thread=1 00:18:04.694 invalidate=1 00:18:04.694 rw=randwrite 00:18:04.694 time_based=1 00:18:04.694 runtime=10 00:18:04.694 ioengine=libaio 00:18:04.694 direct=1 00:18:04.694 bs=262144 00:18:04.694 iodepth=64 00:18:04.694 norandommap=1 00:18:04.694 numjobs=1 00:18:04.694 00:18:04.694 [job0] 00:18:04.694 filename=/dev/nvme0n1 00:18:04.694 [job1] 00:18:04.694 filename=/dev/nvme10n1 00:18:04.694 [job2] 00:18:04.694 filename=/dev/nvme1n1 00:18:04.694 [job3] 00:18:04.694 filename=/dev/nvme2n1 00:18:04.694 [job4] 00:18:04.694 filename=/dev/nvme3n1 00:18:04.694 [job5] 00:18:04.694 
filename=/dev/nvme4n1 00:18:04.694 [job6] 00:18:04.694 filename=/dev/nvme5n1 00:18:04.694 [job7] 00:18:04.694 filename=/dev/nvme6n1 00:18:04.694 [job8] 00:18:04.694 filename=/dev/nvme7n1 00:18:04.694 [job9] 00:18:04.694 filename=/dev/nvme8n1 00:18:04.694 [job10] 00:18:04.694 filename=/dev/nvme9n1 00:18:04.694 Could not set queue depth (nvme0n1) 00:18:04.694 Could not set queue depth (nvme10n1) 00:18:04.694 Could not set queue depth (nvme1n1) 00:18:04.694 Could not set queue depth (nvme2n1) 00:18:04.694 Could not set queue depth (nvme3n1) 00:18:04.694 Could not set queue depth (nvme4n1) 00:18:04.694 Could not set queue depth (nvme5n1) 00:18:04.694 Could not set queue depth (nvme6n1) 00:18:04.694 Could not set queue depth (nvme7n1) 00:18:04.694 Could not set queue depth (nvme8n1) 00:18:04.694 Could not set queue depth (nvme9n1) 00:18:04.694 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:04.694 fio-3.35 00:18:04.694 Starting 11 threads 00:18:14.700 00:18:14.700 job0: (groupid=0, jobs=1): err= 0: pid=78937: Thu Apr 25 18:13:11 2024 00:18:14.700 write: IOPS=1348, BW=337MiB/s (354MB/s)(3385MiB/10037msec); 0 zone resets 00:18:14.700 slat (usec): min=17, max=45919, avg=734.81, stdev=1444.05 00:18:14.700 clat (msec): min=18, max=170, avg=46.70, stdev=21.82 00:18:14.700 lat (msec): min=18, max=170, avg=47.43, stdev=22.13 00:18:14.700 clat percentiles (msec): 00:18:14.700 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:18:14.700 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:18:14.700 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 87], 95.00th=[ 94], 00:18:14.700 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 165], 99.95th=[ 165], 00:18:14.700 | 99.99th=[ 171] 00:18:14.700 bw ( KiB/s): min=104657, max=416064, per=26.65%, avg=344836.30, stdev=113646.10, samples=20 00:18:14.700 iops : min= 408, max= 1625, avg=1346.80, stdev=443.92, samples=20 00:18:14.700 lat (msec) : 20=0.03%, 50=88.99%, 100=7.50%, 250=3.49% 00:18:14.700 cpu : usr=2.09%, sys=2.75%, ctx=17349, majf=0, minf=1 00:18:14.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:14.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:18:14.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.700 issued rwts: total=0,13538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.700 job1: (groupid=0, jobs=1): err= 0: pid=78938: Thu Apr 25 18:13:11 2024 00:18:14.700 write: IOPS=336, BW=84.2MiB/s (88.3MB/s)(857MiB/10173msec); 0 zone resets 00:18:14.700 slat (usec): min=20, max=35999, avg=2858.50, stdev=5373.22 00:18:14.700 clat (usec): min=1246, max=375250, avg=187031.97, stdev=43274.54 00:18:14.700 lat (usec): min=1736, max=375510, avg=189890.47, stdev=43572.60 00:18:14.700 clat percentiles (msec): 00:18:14.700 | 1.00th=[ 8], 5.00th=[ 128], 10.00th=[ 136], 20.00th=[ 165], 00:18:14.700 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 199], 60.00th=[ 205], 00:18:14.700 | 70.00th=[ 209], 80.00th=[ 215], 90.00th=[ 222], 95.00th=[ 228], 00:18:14.700 | 99.00th=[ 266], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 376], 00:18:14.700 | 99.99th=[ 376] 00:18:14.700 bw ( KiB/s): min=70514, max=131584, per=6.65%, avg=86071.90, stdev=16570.13, samples=20 00:18:14.700 iops : min= 275, max= 514, avg=336.10, stdev=64.80, samples=20 00:18:14.700 lat (msec) : 2=0.09%, 4=0.26%, 10=1.17%, 20=0.58%, 50=0.47% 00:18:14.700 lat (msec) : 100=1.05%, 250=95.16%, 500=1.23% 00:18:14.700 cpu : usr=0.59%, sys=0.91%, ctx=4006, majf=0, minf=1 00:18:14.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:14.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.700 issued rwts: total=0,3427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.700 job2: (groupid=0, jobs=1): err= 0: pid=78946: Thu Apr 25 18:13:11 2024 00:18:14.700 write: IOPS=318, BW=79.7MiB/s (83.6MB/s)(811MiB/10179msec); 0 zone resets 00:18:14.700 slat (usec): min=19, max=37861, avg=3009.66, stdev=5576.43 00:18:14.700 clat (usec): min=1778, max=380318, avg=197641.19, stdev=41868.36 00:18:14.700 lat (msec): min=2, max=380, avg=200.65, stdev=42.19 00:18:14.700 clat percentiles (msec): 00:18:14.700 | 1.00th=[ 7], 5.00th=[ 153], 10.00th=[ 182], 20.00th=[ 190], 00:18:14.700 | 30.00th=[ 194], 40.00th=[ 199], 50.00th=[ 203], 60.00th=[ 205], 00:18:14.700 | 70.00th=[ 209], 80.00th=[ 213], 90.00th=[ 230], 95.00th=[ 245], 00:18:14.700 | 99.00th=[ 279], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 380], 00:18:14.700 | 99.99th=[ 380] 00:18:14.700 bw ( KiB/s): min=65536, max=120590, per=6.29%, avg=81407.15, stdev=10452.67, samples=20 00:18:14.700 iops : min= 256, max= 471, avg=317.90, stdev=40.84, samples=20 00:18:14.700 lat (msec) : 2=0.03%, 4=0.40%, 10=1.63%, 20=0.77%, 50=0.25% 00:18:14.700 lat (msec) : 100=0.62%, 250=93.28%, 500=3.02% 00:18:14.700 cpu : usr=0.76%, sys=0.92%, ctx=2927, majf=0, minf=1 00:18:14.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:14.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.700 issued rwts: total=0,3245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.700 job3: (groupid=0, jobs=1): err= 0: pid=78952: Thu Apr 25 18:13:11 2024 00:18:14.700 write: IOPS=403, BW=101MiB/s (106MB/s)(1020MiB/10121msec); 0 zone resets 00:18:14.700 slat (usec): min=20, max=26551, 
avg=2447.52, stdev=4332.28 00:18:14.700 clat (msec): min=29, max=254, avg=156.25, stdev=33.55 00:18:14.700 lat (msec): min=29, max=255, avg=158.70, stdev=33.81 00:18:14.700 clat percentiles (msec): 00:18:14.700 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 138], 00:18:14.700 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:14.700 | 70.00th=[ 146], 80.00th=[ 184], 90.00th=[ 218], 95.00th=[ 232], 00:18:14.700 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:18:14.700 | 99.99th=[ 255] 00:18:14.700 bw ( KiB/s): min=67584, max=118546, per=7.94%, avg=102760.60, stdev=19233.68, samples=20 00:18:14.700 iops : min= 264, max= 463, avg=401.35, stdev=75.16, samples=20 00:18:14.700 lat (msec) : 50=0.20%, 100=0.39%, 250=98.90%, 500=0.51% 00:18:14.700 cpu : usr=0.86%, sys=1.08%, ctx=4760, majf=0, minf=1 00:18:14.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:14.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.700 issued rwts: total=0,4080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 job4: (groupid=0, jobs=1): err= 0: pid=78953: Thu Apr 25 18:13:11 2024 00:18:14.701 write: IOPS=420, BW=105MiB/s (110MB/s)(1066MiB/10124msec); 0 zone resets 00:18:14.701 slat (usec): min=18, max=22606, avg=2317.24, stdev=4120.91 00:18:14.701 clat (msec): min=18, max=254, avg=149.64, stdev=31.01 00:18:14.701 lat (msec): min=18, max=254, avg=151.95, stdev=31.24 00:18:14.701 clat percentiles (msec): 00:18:14.701 | 1.00th=[ 78], 5.00th=[ 130], 10.00th=[ 132], 20.00th=[ 134], 00:18:14.701 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 00:18:14.701 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 209], 95.00th=[ 230], 00:18:14.701 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 253], 99.95th=[ 253], 00:18:14.701 | 99.99th=[ 255] 00:18:14.701 bw ( KiB/s): min=67584, max=126211, per=8.30%, avg=107447.10, stdev=17283.01, samples=20 00:18:14.701 iops : min= 264, max= 493, avg=419.65, stdev=67.48, samples=20 00:18:14.701 lat (msec) : 20=0.09%, 50=0.28%, 100=1.17%, 250=97.98%, 500=0.47% 00:18:14.701 cpu : usr=0.85%, sys=0.95%, ctx=6261, majf=0, minf=1 00:18:14.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:14.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.701 issued rwts: total=0,4262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 job5: (groupid=0, jobs=1): err= 0: pid=78954: Thu Apr 25 18:13:11 2024 00:18:14.701 write: IOPS=402, BW=101MiB/s (105MB/s)(1023MiB/10175msec); 0 zone resets 00:18:14.701 slat (usec): min=19, max=43788, avg=2432.84, stdev=4987.96 00:18:14.701 clat (msec): min=4, max=363, avg=156.65, stdev=71.86 00:18:14.701 lat (msec): min=4, max=363, avg=159.09, stdev=72.80 00:18:14.701 clat percentiles (msec): 00:18:14.701 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:18:14.701 | 30.00th=[ 155], 40.00th=[ 178], 50.00th=[ 190], 60.00th=[ 201], 00:18:14.701 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 218], 95.00th=[ 232], 00:18:14.701 | 99.00th=[ 253], 99.50th=[ 300], 99.90th=[ 351], 99.95th=[ 351], 00:18:14.701 | 99.99th=[ 363] 00:18:14.701 bw ( KiB/s): min=69493, max=326797, per=7.97%, 
avg=103102.30, stdev=69348.23, samples=20 00:18:14.701 iops : min= 271, max= 1276, avg=402.60, stdev=270.84, samples=20 00:18:14.701 lat (msec) : 10=0.17%, 20=0.34%, 50=21.12%, 100=6.97%, 250=69.96% 00:18:14.701 lat (msec) : 500=1.44% 00:18:14.701 cpu : usr=0.85%, sys=1.08%, ctx=3475, majf=0, minf=1 00:18:14.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:14.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.701 issued rwts: total=0,4091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 job6: (groupid=0, jobs=1): err= 0: pid=78955: Thu Apr 25 18:13:11 2024 00:18:14.701 write: IOPS=373, BW=93.3MiB/s (97.8MB/s)(949MiB/10177msec); 0 zone resets 00:18:14.701 slat (usec): min=20, max=38878, avg=2609.75, stdev=5071.37 00:18:14.701 clat (msec): min=15, max=369, avg=168.80, stdev=54.90 00:18:14.701 lat (msec): min=15, max=369, avg=171.41, stdev=55.48 00:18:14.701 clat percentiles (msec): 00:18:14.701 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 92], 20.00th=[ 94], 00:18:14.701 | 30.00th=[ 133], 40.00th=[ 180], 50.00th=[ 194], 60.00th=[ 201], 00:18:14.701 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 220], 95.00th=[ 234], 00:18:14.701 | 99.00th=[ 266], 99.50th=[ 309], 99.90th=[ 359], 99.95th=[ 372], 00:18:14.701 | 99.99th=[ 372] 00:18:14.701 bw ( KiB/s): min=69632, max=176128, per=7.39%, avg=95553.60, stdev=33917.04, samples=20 00:18:14.701 iops : min= 272, max= 688, avg=373.15, stdev=132.53, samples=20 00:18:14.701 lat (msec) : 20=0.11%, 50=0.21%, 100=25.05%, 250=71.40%, 500=3.24% 00:18:14.701 cpu : usr=0.76%, sys=1.10%, ctx=3834, majf=0, minf=1 00:18:14.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:14.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.701 issued rwts: total=0,3797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 job7: (groupid=0, jobs=1): err= 0: pid=78956: Thu Apr 25 18:13:11 2024 00:18:14.701 write: IOPS=431, BW=108MiB/s (113MB/s)(1092MiB/10125msec); 0 zone resets 00:18:14.701 slat (usec): min=19, max=41355, avg=2238.29, stdev=3977.61 00:18:14.701 clat (msec): min=6, max=258, avg=146.07, stdev=19.42 00:18:14.701 lat (msec): min=6, max=258, avg=148.31, stdev=19.29 00:18:14.701 clat percentiles (msec): 00:18:14.701 | 1.00th=[ 118], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 00:18:14.701 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:14.701 | 70.00th=[ 144], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 182], 00:18:14.701 | 99.00th=[ 199], 99.50th=[ 215], 99.90th=[ 249], 99.95th=[ 251], 00:18:14.701 | 99.99th=[ 259] 00:18:14.701 bw ( KiB/s): min=88576, max=118784, per=8.51%, avg=110116.35, stdev=9313.71, samples=20 00:18:14.701 iops : min= 346, max= 464, avg=430.10, stdev=36.40, samples=20 00:18:14.701 lat (msec) : 10=0.07%, 20=0.09%, 50=0.37%, 100=0.18%, 250=99.24% 00:18:14.701 lat (msec) : 500=0.05% 00:18:14.701 cpu : usr=0.90%, sys=1.16%, ctx=4898, majf=0, minf=1 00:18:14.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:14.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:18:14.701 issued rwts: total=0,4367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 job8: (groupid=0, jobs=1): err= 0: pid=78957: Thu Apr 25 18:13:11 2024 00:18:14.701 write: IOPS=403, BW=101MiB/s (106MB/s)(1021MiB/10124msec); 0 zone resets 00:18:14.701 slat (usec): min=20, max=17657, avg=2445.17, stdev=4299.32 00:18:14.701 clat (msec): min=17, max=256, avg=156.10, stdev=33.11 00:18:14.701 lat (msec): min=17, max=256, avg=158.54, stdev=33.35 00:18:14.701 clat percentiles (msec): 00:18:14.701 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 138], 00:18:14.701 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:14.701 | 70.00th=[ 146], 80.00th=[ 184], 90.00th=[ 218], 95.00th=[ 230], 00:18:14.701 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 253], 99.95th=[ 253], 00:18:14.701 | 99.99th=[ 257] 00:18:14.701 bw ( KiB/s): min=67449, max=118546, per=7.95%, avg=102890.20, stdev=19039.53, samples=20 00:18:14.701 iops : min= 263, max= 463, avg=401.80, stdev=74.51, samples=20 00:18:14.701 lat (msec) : 20=0.02%, 50=0.20%, 100=0.49%, 250=98.87%, 500=0.42% 00:18:14.701 cpu : usr=0.92%, sys=0.83%, ctx=6337, majf=0, minf=1 00:18:14.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:14.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.701 issued rwts: total=0,4085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 job9: (groupid=0, jobs=1): err= 0: pid=78958: Thu Apr 25 18:13:11 2024 00:18:14.701 write: IOPS=306, BW=76.5MiB/s (80.2MB/s)(779MiB/10178msec); 0 zone resets 00:18:14.701 slat (usec): min=18, max=39095, avg=3135.44, stdev=5738.42 00:18:14.701 clat (msec): min=25, max=379, avg=205.84, stdev=26.27 00:18:14.701 lat (msec): min=25, max=379, avg=208.98, stdev=26.13 00:18:14.701 clat percentiles (msec): 00:18:14.701 | 1.00th=[ 113], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 194], 00:18:14.701 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 207], 60.00th=[ 209], 00:18:14.701 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 232], 95.00th=[ 243], 00:18:14.701 | 99.00th=[ 275], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 380], 00:18:14.701 | 99.99th=[ 380] 00:18:14.701 bw ( KiB/s): min=65536, max=84480, per=6.03%, avg=78069.55, stdev=4882.52, samples=20 00:18:14.701 iops : min= 256, max= 330, avg=304.80, stdev=19.13, samples=20 00:18:14.701 lat (msec) : 50=0.39%, 100=0.51%, 250=97.27%, 500=1.83% 00:18:14.701 cpu : usr=0.62%, sys=1.06%, ctx=3024, majf=0, minf=1 00:18:14.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:14.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.701 issued rwts: total=0,3115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 job10: (groupid=0, jobs=1): err= 0: pid=78959: Thu Apr 25 18:13:11 2024 00:18:14.701 write: IOPS=339, BW=84.8MiB/s (88.9MB/s)(864MiB/10182msec); 0 zone resets 00:18:14.701 slat (usec): min=21, max=31849, avg=2891.90, stdev=5181.77 00:18:14.701 clat (msec): min=9, max=381, avg=185.60, stdev=34.23 00:18:14.701 lat (msec): min=9, max=381, avg=188.49, stdev=34.35 00:18:14.701 clat percentiles (msec): 00:18:14.701 | 1.00th=[ 71], 5.00th=[ 
131], 10.00th=[ 138], 20.00th=[ 163], 00:18:14.701 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 197], 00:18:14.701 | 70.00th=[ 201], 80.00th=[ 207], 90.00th=[ 213], 95.00th=[ 226], 00:18:14.701 | 99.00th=[ 264], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 380], 00:18:14.701 | 99.99th=[ 380] 00:18:14.701 bw ( KiB/s): min=68096, max=120320, per=6.70%, avg=86746.50, stdev=12847.01, samples=20 00:18:14.701 iops : min= 266, max= 470, avg=338.75, stdev=50.19, samples=20 00:18:14.701 lat (msec) : 10=0.17%, 20=0.03%, 50=0.29%, 100=0.98%, 250=96.90% 00:18:14.701 lat (msec) : 500=1.62% 00:18:14.701 cpu : usr=0.69%, sys=0.97%, ctx=3417, majf=0, minf=1 00:18:14.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:14.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:14.701 issued rwts: total=0,3454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:14.701 00:18:14.701 Run status group 0 (all jobs): 00:18:14.701 WRITE: bw=1264MiB/s (1325MB/s), 76.5MiB/s-337MiB/s (80.2MB/s-354MB/s), io=12.6GiB (13.5GB), run=10037-10182msec 00:18:14.701 00:18:14.701 Disk stats (read/write): 00:18:14.701 nvme0n1: ios=50/26890, merge=0/0, ticks=44/1218567, in_queue=1218611, util=97.81% 00:18:14.701 nvme10n1: ios=49/6723, merge=0/0, ticks=51/1208770, in_queue=1208821, util=97.96% 00:18:14.702 nvme1n1: ios=49/6367, merge=0/0, ticks=40/1210699, in_queue=1210739, util=98.19% 00:18:14.702 nvme2n1: ios=20/8014, merge=0/0, ticks=35/1212358, in_queue=1212393, util=98.00% 00:18:14.702 nvme3n1: ios=28/8380, merge=0/0, ticks=25/1213627, in_queue=1213652, util=98.14% 00:18:14.702 nvme4n1: ios=0/8041, merge=0/0, ticks=0/1207577, in_queue=1207577, util=98.18% 00:18:14.702 nvme5n1: ios=0/7458, merge=0/0, ticks=0/1208162, in_queue=1208162, util=98.32% 00:18:14.702 nvme6n1: ios=0/8598, merge=0/0, ticks=0/1214228, in_queue=1214228, util=98.44% 00:18:14.702 nvme7n1: ios=0/8030, merge=0/0, ticks=0/1213716, in_queue=1213716, util=98.65% 00:18:14.702 nvme8n1: ios=0/6098, merge=0/0, ticks=0/1209761, in_queue=1209761, util=98.79% 00:18:14.702 nvme9n1: ios=0/6777, merge=0/0, ticks=0/1208507, in_queue=1208507, util=98.95% 00:18:14.702 18:13:11 -- target/multiconnection.sh@36 -- # sync 00:18:14.702 18:13:11 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:14.702 18:13:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.702 18:13:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:14.702 18:13:11 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:14.702 18:13:11 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.702 18:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:11 
-- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:14.702 18:13:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:14.702 18:13:11 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:14.702 18:13:11 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:14.702 18:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:14.702 18:13:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:14.702 18:13:11 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:14.702 18:13:11 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:14.702 18:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:14.702 18:13:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:14.702 18:13:11 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:14.702 18:13:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:14.702 18:13:11 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:14.702 18:13:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:11 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:14.702 18:13:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:14.702 18:13:12 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:14.702 18:13:12 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:14.702 18:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:14.702 18:13:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:14.702 18:13:12 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:14.702 18:13:12 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:14.702 18:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:14.702 18:13:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:14.702 18:13:12 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:14.702 18:13:12 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:14.702 18:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:12 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:14.702 18:13:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:14.702 18:13:12 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:14.702 18:13:12 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:14.702 18:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:14.702 18:13:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:14.702 18:13:12 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:14.702 18:13:12 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.702 18:13:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:14.702 18:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.702 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.702 18:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.702 18:13:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.702 18:13:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:14.702 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:14.702 18:13:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:14.702 18:13:12 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.702 18:13:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:14.962 18:13:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:14.962 18:13:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.962 18:13:12 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.962 18:13:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:14.962 18:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.962 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.962 18:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.962 18:13:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.962 18:13:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 
00:18:14.962 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:14.962 18:13:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:14.962 18:13:12 -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.962 18:13:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:14.962 18:13:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:14.962 18:13:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:14.962 18:13:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:14.962 18:13:12 -- common/autotest_common.sh@1210 -- # return 0 00:18:14.962 18:13:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:14.962 18:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.962 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.962 18:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.962 18:13:12 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:14.962 18:13:12 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:14.962 18:13:12 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:14.962 18:13:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:14.962 18:13:12 -- nvmf/common.sh@116 -- # sync 00:18:14.962 18:13:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:14.962 18:13:12 -- nvmf/common.sh@119 -- # set +e 00:18:14.962 18:13:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:14.962 18:13:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:14.962 rmmod nvme_tcp 00:18:14.962 rmmod nvme_fabrics 00:18:14.962 rmmod nvme_keyring 00:18:14.962 18:13:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:14.962 18:13:12 -- nvmf/common.sh@123 -- # set -e 00:18:14.962 18:13:12 -- nvmf/common.sh@124 -- # return 0 00:18:14.962 18:13:12 -- nvmf/common.sh@477 -- # '[' -n 78251 ']' 00:18:14.962 18:13:12 -- nvmf/common.sh@478 -- # killprocess 78251 00:18:14.962 18:13:12 -- common/autotest_common.sh@926 -- # '[' -z 78251 ']' 00:18:14.962 18:13:12 -- common/autotest_common.sh@930 -- # kill -0 78251 00:18:14.962 18:13:12 -- common/autotest_common.sh@931 -- # uname 00:18:14.962 18:13:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:14.962 18:13:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78251 00:18:14.962 killing process with pid 78251 00:18:14.962 18:13:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:14.962 18:13:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:14.962 18:13:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78251' 00:18:14.962 18:13:12 -- common/autotest_common.sh@945 -- # kill 78251 00:18:14.962 18:13:12 -- common/autotest_common.sh@950 -- # wait 78251 00:18:15.528 18:13:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.528 18:13:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:15.528 18:13:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:15.528 18:13:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.528 18:13:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:15.528 18:13:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.528 18:13:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.528 18:13:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.528 18:13:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
00:18:15.528 00:18:15.528 real 0m49.867s 00:18:15.528 user 2m53.502s 00:18:15.528 sys 0m20.684s 00:18:15.528 18:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.528 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:18:15.528 ************************************ 00:18:15.528 END TEST nvmf_multiconnection 00:18:15.528 ************************************ 00:18:15.528 18:13:13 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:15.528 18:13:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:15.528 18:13:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:15.528 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:18:15.528 ************************************ 00:18:15.528 START TEST nvmf_initiator_timeout 00:18:15.528 ************************************ 00:18:15.528 18:13:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:15.786 * Looking for test storage... 00:18:15.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:15.786 18:13:13 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.786 18:13:13 -- nvmf/common.sh@7 -- # uname -s 00:18:15.786 18:13:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.786 18:13:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.786 18:13:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.786 18:13:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.786 18:13:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.786 18:13:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.786 18:13:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.786 18:13:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.786 18:13:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.786 18:13:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.786 18:13:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:18:15.786 18:13:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:18:15.786 18:13:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.786 18:13:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.786 18:13:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.786 18:13:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.786 18:13:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.786 18:13:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.786 18:13:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.786 18:13:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.786 18:13:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.786 18:13:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.786 18:13:13 -- paths/export.sh@5 -- # export PATH 00:18:15.786 18:13:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.786 18:13:13 -- nvmf/common.sh@46 -- # : 0 00:18:15.786 18:13:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:15.786 18:13:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:15.786 18:13:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:15.786 18:13:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.786 18:13:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.786 18:13:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:15.786 18:13:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:15.786 18:13:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:15.786 18:13:13 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.786 18:13:13 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.786 18:13:13 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:15.786 18:13:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:15.786 18:13:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.786 18:13:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:15.786 18:13:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:15.786 18:13:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:15.786 18:13:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.786 18:13:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.786 18:13:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.786 18:13:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:15.786 18:13:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:15.786 18:13:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:15.786 18:13:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:15.786 18:13:13 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:18:15.786 18:13:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:15.786 18:13:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.786 18:13:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.786 18:13:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:15.786 18:13:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:15.786 18:13:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.786 18:13:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.786 18:13:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.786 18:13:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.786 18:13:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.786 18:13:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.786 18:13:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.786 18:13:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.786 18:13:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:15.786 18:13:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:15.786 Cannot find device "nvmf_tgt_br" 00:18:15.786 18:13:13 -- nvmf/common.sh@154 -- # true 00:18:15.786 18:13:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.786 Cannot find device "nvmf_tgt_br2" 00:18:15.786 18:13:13 -- nvmf/common.sh@155 -- # true 00:18:15.786 18:13:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:15.786 18:13:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:15.786 Cannot find device "nvmf_tgt_br" 00:18:15.786 18:13:13 -- nvmf/common.sh@157 -- # true 00:18:15.786 18:13:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:15.786 Cannot find device "nvmf_tgt_br2" 00:18:15.786 18:13:13 -- nvmf/common.sh@158 -- # true 00:18:15.786 18:13:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:15.786 18:13:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:15.786 18:13:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.786 18:13:13 -- nvmf/common.sh@161 -- # true 00:18:15.786 18:13:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.786 18:13:13 -- nvmf/common.sh@162 -- # true 00:18:15.786 18:13:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.786 18:13:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.786 18:13:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.786 18:13:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.786 18:13:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.786 18:13:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.786 18:13:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.786 18:13:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:15.786 18:13:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:18:15.786 18:13:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:16.045 18:13:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:16.045 18:13:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:16.045 18:13:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:16.045 18:13:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.045 18:13:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.045 18:13:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.045 18:13:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:16.045 18:13:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:16.045 18:13:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.045 18:13:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.045 18:13:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.045 18:13:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.045 18:13:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.045 18:13:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:16.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:18:16.045 00:18:16.045 --- 10.0.0.2 ping statistics --- 00:18:16.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.045 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:16.045 18:13:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:16.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:16.045 00:18:16.045 --- 10.0.0.3 ping statistics --- 00:18:16.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.045 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:16.045 18:13:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:16.045 00:18:16.045 --- 10.0.0.1 ping statistics --- 00:18:16.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.045 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:16.045 18:13:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.045 18:13:13 -- nvmf/common.sh@421 -- # return 0 00:18:16.045 18:13:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.045 18:13:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.045 18:13:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.045 18:13:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.045 18:13:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.045 18:13:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.045 18:13:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.045 18:13:13 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:16.045 18:13:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:16.045 18:13:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:16.045 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:18:16.045 18:13:13 -- nvmf/common.sh@469 -- # nvmfpid=79324 00:18:16.045 18:13:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.045 18:13:13 -- nvmf/common.sh@470 -- # waitforlisten 79324 00:18:16.045 18:13:13 -- common/autotest_common.sh@819 -- # '[' -z 79324 ']' 00:18:16.045 18:13:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.045 18:13:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:16.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.045 18:13:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.045 18:13:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:16.045 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:18:16.045 [2024-04-25 18:13:13.901522] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:16.045 [2024-04-25 18:13:13.901621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.304 [2024-04-25 18:13:14.036209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.304 [2024-04-25 18:13:14.134888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.304 [2024-04-25 18:13:14.135020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.304 [2024-04-25 18:13:14.135032] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.304 [2024-04-25 18:13:14.135040] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.304 [2024-04-25 18:13:14.135217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.304 [2024-04-25 18:13:14.135916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.304 [2024-04-25 18:13:14.136085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.304 [2024-04-25 18:13:14.136090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.240 18:13:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:17.240 18:13:14 -- common/autotest_common.sh@852 -- # return 0 00:18:17.240 18:13:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:17.240 18:13:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:17.240 18:13:14 -- common/autotest_common.sh@10 -- # set +x 00:18:17.240 18:13:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.240 18:13:14 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:17.240 18:13:14 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:17.240 18:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.240 18:13:14 -- common/autotest_common.sh@10 -- # set +x 00:18:17.240 Malloc0 00:18:17.240 18:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.240 18:13:14 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:17.240 18:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.240 18:13:14 -- common/autotest_common.sh@10 -- # set +x 00:18:17.240 Delay0 00:18:17.240 18:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.240 18:13:14 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.240 18:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.240 18:13:14 -- common/autotest_common.sh@10 -- # set +x 00:18:17.240 [2024-04-25 18:13:15.001004] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.240 18:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.240 18:13:15 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:17.240 18:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.240 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:17.240 18:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.240 18:13:15 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:17.240 18:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.240 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:17.240 18:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.240 18:13:15 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.240 18:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:17.240 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:17.240 [2024-04-25 18:13:15.029228] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.240 18:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:17.240 18:13:15 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.499 18:13:15 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:17.499 18:13:15 -- common/autotest_common.sh@1177 -- # local i=0 00:18:17.499 18:13:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.499 18:13:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:17.499 18:13:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:19.435 18:13:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:19.435 18:13:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:19.435 18:13:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:19.435 18:13:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:19.435 18:13:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.435 18:13:17 -- common/autotest_common.sh@1187 -- # return 0 00:18:19.435 18:13:17 -- target/initiator_timeout.sh@35 -- # fio_pid=79406 00:18:19.435 18:13:17 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:19.435 18:13:17 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:19.435 [global] 00:18:19.435 thread=1 00:18:19.435 invalidate=1 00:18:19.435 rw=write 00:18:19.435 time_based=1 00:18:19.435 runtime=60 00:18:19.435 ioengine=libaio 00:18:19.435 direct=1 00:18:19.435 bs=4096 00:18:19.435 iodepth=1 00:18:19.435 norandommap=0 00:18:19.435 numjobs=1 00:18:19.435 00:18:19.435 verify_dump=1 00:18:19.435 verify_backlog=512 00:18:19.435 verify_state_save=0 00:18:19.435 do_verify=1 00:18:19.435 verify=crc32c-intel 00:18:19.435 [job0] 00:18:19.435 filename=/dev/nvme0n1 00:18:19.435 Could not set queue depth (nvme0n1) 00:18:19.694 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.694 fio-3.35 00:18:19.694 Starting 1 thread 00:18:22.991 18:13:20 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:22.991 18:13:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.991 18:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.991 true 00:18:22.991 18:13:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.991 18:13:20 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:22.991 18:13:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.991 18:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.991 true 00:18:22.991 18:13:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.991 18:13:20 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:22.991 18:13:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.991 18:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.991 true 00:18:22.991 18:13:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.991 18:13:20 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:22.991 18:13:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:22.991 18:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.991 true 00:18:22.991 18:13:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:22.991 18:13:20 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:25.531 18:13:23 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:25.532 18:13:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.532 18:13:23 -- common/autotest_common.sh@10 -- # set +x 00:18:25.532 true 00:18:25.532 18:13:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.532 18:13:23 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:25.532 18:13:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.532 18:13:23 -- common/autotest_common.sh@10 -- # set +x 00:18:25.532 true 00:18:25.532 18:13:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.532 18:13:23 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:25.532 18:13:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.532 18:13:23 -- common/autotest_common.sh@10 -- # set +x 00:18:25.532 true 00:18:25.532 18:13:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.532 18:13:23 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:25.532 18:13:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.532 18:13:23 -- common/autotest_common.sh@10 -- # set +x 00:18:25.532 true 00:18:25.532 18:13:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.532 18:13:23 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:25.532 18:13:23 -- target/initiator_timeout.sh@54 -- # wait 79406 00:19:21.767 00:19:21.767 job0: (groupid=0, jobs=1): err= 0: pid=79427: Thu Apr 25 18:14:17 2024 00:19:21.767 read: IOPS=691, BW=2765KiB/s (2831kB/s)(162MiB/60000msec) 00:19:21.767 slat (usec): min=12, max=11799, avg=16.24, stdev=69.39 00:19:21.767 clat (usec): min=162, max=40840k, avg=1220.59, stdev=200539.94 00:19:21.767 lat (usec): min=179, max=40840k, avg=1236.84, stdev=200539.93 00:19:21.767 clat percentiles (usec): 00:19:21.767 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:19:21.767 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:19:21.767 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 285], 00:19:21.767 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 388], 99.95th=[ 545], 00:19:21.767 | 99.99th=[ 1106] 00:19:21.767 write: IOPS=697, BW=2788KiB/s (2855kB/s)(163MiB/60000msec); 0 zone resets 00:19:21.767 slat (usec): min=18, max=837, avg=22.87, stdev= 7.53 00:19:21.767 clat (usec): min=128, max=7378, avg=181.96, stdev=42.56 00:19:21.767 lat (usec): min=152, max=7401, avg=204.82, stdev=43.59 00:19:21.767 clat percentiles (usec): 00:19:21.767 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:19:21.767 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:19:21.767 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:19:21.767 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 297], 99.95th=[ 375], 00:19:21.767 | 99.99th=[ 791] 00:19:21.767 bw ( KiB/s): min= 4096, max= 9984, per=100.00%, avg=8610.50, stdev=942.33, samples=38 00:19:21.767 iops : min= 1024, max= 2496, avg=2152.61, stdev=235.59, samples=38 00:19:21.767 lat (usec) : 250=89.03%, 500=10.92%, 750=0.02%, 1000=0.01% 00:19:21.767 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:19:21.767 cpu : usr=0.50%, sys=1.87%, ctx=83307, majf=0, minf=2 00:19:21.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.767 issued rwts: total=41472,41826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.767 00:19:21.767 Run status group 0 (all jobs): 00:19:21.767 READ: bw=2765KiB/s (2831kB/s), 2765KiB/s-2765KiB/s (2831kB/s-2831kB/s), io=162MiB (170MB), run=60000-60000msec 00:19:21.767 WRITE: bw=2788KiB/s (2855kB/s), 2788KiB/s-2788KiB/s (2855kB/s-2855kB/s), io=163MiB (171MB), run=60000-60000msec 00:19:21.767 00:19:21.767 Disk stats (read/write): 00:19:21.767 nvme0n1: ios=41667/41472, merge=0/0, ticks=10071/7861, in_queue=17932, util=99.59% 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:21.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:21.767 18:14:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:21.767 18:14:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:21.767 18:14:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:21.767 18:14:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:21.767 18:14:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:21.767 18:14:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:21.767 nvmf hotplug test: fio successful as expected 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.767 18:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.767 18:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:21.767 18:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:21.767 18:14:17 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:21.767 18:14:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:21.767 18:14:17 -- nvmf/common.sh@116 -- # sync 00:19:21.767 18:14:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:21.767 18:14:17 -- nvmf/common.sh@119 -- # set +e 00:19:21.767 18:14:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:21.767 18:14:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:21.767 rmmod nvme_tcp 00:19:21.767 rmmod nvme_fabrics 00:19:21.767 rmmod nvme_keyring 00:19:21.767 18:14:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:21.767 18:14:17 -- nvmf/common.sh@123 -- # set -e 00:19:21.767 18:14:17 -- nvmf/common.sh@124 -- # return 0 00:19:21.767 18:14:17 -- nvmf/common.sh@477 -- # '[' -n 79324 ']' 00:19:21.767 18:14:17 -- nvmf/common.sh@478 -- # killprocess 79324 00:19:21.767 18:14:17 -- common/autotest_common.sh@926 -- # '[' -z 79324 ']' 00:19:21.767 18:14:17 -- common/autotest_common.sh@930 -- # kill -0 79324 00:19:21.767 18:14:17 -- common/autotest_common.sh@931 -- # uname 00:19:21.767 18:14:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:21.767 18:14:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79324 00:19:21.767 18:14:17 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:21.767 18:14:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:21.767 killing process with pid 79324 00:19:21.767 18:14:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79324' 00:19:21.767 18:14:17 -- common/autotest_common.sh@945 -- # kill 79324 00:19:21.767 18:14:17 -- common/autotest_common.sh@950 -- # wait 79324 00:19:21.767 18:14:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:21.767 18:14:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:21.767 18:14:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:21.767 18:14:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.767 18:14:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:21.767 18:14:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.767 18:14:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.767 18:14:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.767 18:14:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:21.767 00:19:21.767 real 1m4.548s 00:19:21.767 user 4m5.630s 00:19:21.767 sys 0m8.842s 00:19:21.767 18:14:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.767 ************************************ 00:19:21.767 18:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:21.767 END TEST nvmf_initiator_timeout 00:19:21.767 ************************************ 00:19:21.767 18:14:18 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:21.767 18:14:18 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:19:21.767 18:14:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:21.767 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.767 18:14:18 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:19:21.767 18:14:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:21.767 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.767 18:14:18 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:19:21.767 18:14:18 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:21.767 18:14:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:21.767 18:14:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:21.767 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.767 ************************************ 00:19:21.767 START TEST nvmf_multicontroller 00:19:21.767 ************************************ 00:19:21.767 18:14:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:21.767 * Looking for test storage... 
00:19:21.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:21.767 18:14:18 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.767 18:14:18 -- nvmf/common.sh@7 -- # uname -s 00:19:21.767 18:14:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.767 18:14:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.767 18:14:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.767 18:14:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.767 18:14:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.767 18:14:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.767 18:14:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.767 18:14:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.767 18:14:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.767 18:14:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.767 18:14:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:21.767 18:14:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:21.767 18:14:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.767 18:14:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.767 18:14:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:21.767 18:14:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.767 18:14:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.767 18:14:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.767 18:14:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.767 18:14:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.767 18:14:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.767 18:14:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.768 18:14:18 -- 
paths/export.sh@5 -- # export PATH 00:19:21.768 18:14:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.768 18:14:18 -- nvmf/common.sh@46 -- # : 0 00:19:21.768 18:14:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:21.768 18:14:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:21.768 18:14:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:21.768 18:14:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.768 18:14:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.768 18:14:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:21.768 18:14:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:21.768 18:14:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:21.768 18:14:18 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.768 18:14:18 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.768 18:14:18 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:21.768 18:14:18 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:21.768 18:14:18 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.768 18:14:18 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:21.768 18:14:18 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:21.768 18:14:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:21.768 18:14:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.768 18:14:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:21.768 18:14:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:21.768 18:14:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:21.768 18:14:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.768 18:14:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.768 18:14:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.768 18:14:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:21.768 18:14:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:21.768 18:14:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:21.768 18:14:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:21.768 18:14:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:21.768 18:14:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:21.768 18:14:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.768 18:14:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.768 18:14:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:21.768 18:14:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:21.768 18:14:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.768 18:14:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.768 18:14:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.768 18:14:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.768 18:14:18 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.768 18:14:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.768 18:14:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.768 18:14:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.768 18:14:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:21.768 18:14:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:21.768 Cannot find device "nvmf_tgt_br" 00:19:21.768 18:14:18 -- nvmf/common.sh@154 -- # true 00:19:21.768 18:14:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.768 Cannot find device "nvmf_tgt_br2" 00:19:21.768 18:14:18 -- nvmf/common.sh@155 -- # true 00:19:21.768 18:14:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:21.768 18:14:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:21.768 Cannot find device "nvmf_tgt_br" 00:19:21.768 18:14:18 -- nvmf/common.sh@157 -- # true 00:19:21.768 18:14:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:21.768 Cannot find device "nvmf_tgt_br2" 00:19:21.768 18:14:18 -- nvmf/common.sh@158 -- # true 00:19:21.768 18:14:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:21.768 18:14:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:21.768 18:14:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.768 18:14:18 -- nvmf/common.sh@161 -- # true 00:19:21.768 18:14:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.768 18:14:18 -- nvmf/common.sh@162 -- # true 00:19:21.768 18:14:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.768 18:14:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.768 18:14:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.768 18:14:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.768 18:14:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.768 18:14:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.768 18:14:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.768 18:14:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:21.768 18:14:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:21.768 18:14:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:21.768 18:14:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:21.768 18:14:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:21.768 18:14:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:21.768 18:14:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.768 18:14:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.768 18:14:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.768 18:14:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:21.768 18:14:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:21.768 18:14:18 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.768 18:14:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.768 18:14:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.768 18:14:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.768 18:14:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.768 18:14:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:21.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:21.768 00:19:21.768 --- 10.0.0.2 ping statistics --- 00:19:21.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.768 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:21.768 18:14:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:21.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:21.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:21.768 00:19:21.768 --- 10.0.0.3 ping statistics --- 00:19:21.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.768 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:21.768 18:14:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:21.768 00:19:21.768 --- 10.0.0.1 ping statistics --- 00:19:21.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.768 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:21.768 18:14:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.768 18:14:18 -- nvmf/common.sh@421 -- # return 0 00:19:21.768 18:14:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:21.768 18:14:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.768 18:14:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:21.768 18:14:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:21.768 18:14:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.768 18:14:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:21.768 18:14:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:21.768 18:14:18 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:21.768 18:14:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:21.768 18:14:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:21.768 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.768 18:14:18 -- nvmf/common.sh@469 -- # nvmfpid=80255 00:19:21.768 18:14:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:21.768 18:14:18 -- nvmf/common.sh@470 -- # waitforlisten 80255 00:19:21.768 18:14:18 -- common/autotest_common.sh@819 -- # '[' -z 80255 ']' 00:19:21.768 18:14:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.768 18:14:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:21.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.768 18:14:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
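[Note: the nvmf_veth_init trace above amounts to a small veth/bridge topology between the host and the nvmf_tgt_ns_spdk namespace. Condensed to its core commands, with names and addresses exactly as they appear in the trace (a sketch, not the full helper; the second target interface nvmf_tgt_if2/nvmf_tgt_br2 follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target-namespace reachability check
]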
00:19:21.768 18:14:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:21.768 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.768 [2024-04-25 18:14:18.591021] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:21.768 [2024-04-25 18:14:18.591115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.768 [2024-04-25 18:14:18.733066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.768 [2024-04-25 18:14:18.835541] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:21.768 [2024-04-25 18:14:18.835707] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.768 [2024-04-25 18:14:18.835722] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.768 [2024-04-25 18:14:18.835734] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.768 [2024-04-25 18:14:18.835893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.768 [2024-04-25 18:14:18.835997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.768 [2024-04-25 18:14:18.836006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.768 18:14:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:21.769 18:14:19 -- common/autotest_common.sh@852 -- # return 0 00:19:21.769 18:14:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:21.769 18:14:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:21.769 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 18:14:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.769 18:14:19 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.769 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.769 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 [2024-04-25 18:14:19.629103] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.769 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.769 18:14:19 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.769 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.769 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 Malloc0 00:19:21.769 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.769 18:14:19 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.769 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.769 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.769 18:14:19 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.769 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.769 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.769 18:14:19 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.769 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.769 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 [2024-04-25 18:14:19.688690] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.769 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.769 18:14:19 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:21.769 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.769 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 [2024-04-25 18:14:19.696586] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:22.028 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.028 18:14:19 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:22.028 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.028 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.028 Malloc1 00:19:22.028 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.028 18:14:19 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:22.028 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.028 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.028 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.028 18:14:19 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:22.028 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.028 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.028 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.028 18:14:19 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:22.028 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.028 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.028 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.028 18:14:19 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:22.028 18:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.028 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.028 18:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.028 18:14:19 -- host/multicontroller.sh@44 -- # bdevperf_pid=80307 00:19:22.028 18:14:19 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:22.028 18:14:19 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.028 18:14:19 -- host/multicontroller.sh@47 -- # waitforlisten 80307 /var/tmp/bdevperf.sock 00:19:22.028 18:14:19 -- common/autotest_common.sh@819 -- # '[' -z 80307 ']' 00:19:22.028 18:14:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.028 18:14:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:22.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
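[Note: bdevperf is started with its own RPC socket (-r /var/tmp/bdevperf.sock) and is then driven through rpc_cmd -s /var/tmp/bdevperf.sock. The negative cases that follow re-issue bdev_nvme_attach_controller for the already-attached NVMe0 controller with conflicting parameters; roughly, assuming rpc_cmd wraps scripts/rpc.py as usual:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # first attach: succeeds and exposes bdev NVMe0n1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # same controller name, different hostnqn: rejected with Code=-114
    # "A controller named NVMe0 already exists with the specified network path"
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001

The remaining cases vary the subsystem NQN (cnode2) and the multipath mode (-x disable, -x failover) in the same way, as the error responses below show.]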
00:19:22.028 18:14:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.028 18:14:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:22.028 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.963 18:14:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:22.963 18:14:20 -- common/autotest_common.sh@852 -- # return 0 00:19:22.963 18:14:20 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:22.963 18:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.963 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:19:22.963 NVMe0n1 00:19:22.963 18:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.963 18:14:20 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:22.963 18:14:20 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:22.964 18:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.964 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:19:22.964 18:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.964 1 00:19:22.964 18:14:20 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:22.964 18:14:20 -- common/autotest_common.sh@640 -- # local es=0 00:19:22.964 18:14:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:22.964 18:14:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:22.964 18:14:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:22.964 18:14:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:22.964 18:14:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:22.964 18:14:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:22.964 18:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.964 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:19:23.224 2024/04/25 18:14:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:23.224 request: 00:19:23.224 { 00:19:23.224 "method": "bdev_nvme_attach_controller", 00:19:23.224 "params": { 00:19:23.224 "name": "NVMe0", 00:19:23.224 "trtype": "tcp", 00:19:23.224 "traddr": "10.0.0.2", 00:19:23.224 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:23.224 "hostaddr": "10.0.0.2", 00:19:23.224 "hostsvcid": "60000", 00:19:23.224 "adrfam": "ipv4", 00:19:23.224 "trsvcid": "4420", 00:19:23.224 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:23.224 } 00:19:23.224 } 00:19:23.224 Got JSON-RPC error 
response 00:19:23.224 GoRPCClient: error on JSON-RPC call 00:19:23.224 18:14:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@643 -- # es=1 00:19:23.224 18:14:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:23.224 18:14:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:23.224 18:14:20 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:23.224 18:14:20 -- common/autotest_common.sh@640 -- # local es=0 00:19:23.224 18:14:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:23.224 18:14:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.224 18:14:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:23.224 18:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.224 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:19:23.224 2024/04/25 18:14:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:23.224 request: 00:19:23.224 { 00:19:23.224 "method": "bdev_nvme_attach_controller", 00:19:23.224 "params": { 00:19:23.224 "name": "NVMe0", 00:19:23.224 "trtype": "tcp", 00:19:23.224 "traddr": "10.0.0.2", 00:19:23.224 "hostaddr": "10.0.0.2", 00:19:23.224 "hostsvcid": "60000", 00:19:23.224 "adrfam": "ipv4", 00:19:23.224 "trsvcid": "4420", 00:19:23.224 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:23.224 } 00:19:23.224 } 00:19:23.224 Got JSON-RPC error response 00:19:23.224 GoRPCClient: error on JSON-RPC call 00:19:23.224 18:14:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@643 -- # es=1 00:19:23.224 18:14:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:23.224 18:14:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:23.224 18:14:20 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:23.224 18:14:20 -- common/autotest_common.sh@640 -- # local es=0 00:19:23.224 18:14:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:23.224 18:14:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:23.224 18:14:20 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.224 18:14:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:23.224 18:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.224 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:19:23.224 2024/04/25 18:14:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:23.224 request: 00:19:23.224 { 00:19:23.224 "method": "bdev_nvme_attach_controller", 00:19:23.224 "params": { 00:19:23.224 "name": "NVMe0", 00:19:23.224 "trtype": "tcp", 00:19:23.224 "traddr": "10.0.0.2", 00:19:23.224 "hostaddr": "10.0.0.2", 00:19:23.224 "hostsvcid": "60000", 00:19:23.224 "adrfam": "ipv4", 00:19:23.224 "trsvcid": "4420", 00:19:23.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.224 "multipath": "disable" 00:19:23.224 } 00:19:23.224 } 00:19:23.224 Got JSON-RPC error response 00:19:23.224 GoRPCClient: error on JSON-RPC call 00:19:23.224 18:14:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@643 -- # es=1 00:19:23.224 18:14:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:23.224 18:14:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:23.224 18:14:20 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:23.224 18:14:20 -- common/autotest_common.sh@640 -- # local es=0 00:19:23.224 18:14:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:23.224 18:14:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:23.224 18:14:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.224 18:14:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:23.224 18:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.224 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:19:23.224 2024/04/25 18:14:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:19:23.224 request: 00:19:23.224 { 00:19:23.224 "method": "bdev_nvme_attach_controller", 00:19:23.224 "params": { 00:19:23.224 "name": "NVMe0", 00:19:23.224 "trtype": "tcp", 00:19:23.224 "traddr": "10.0.0.2", 00:19:23.224 "hostaddr": "10.0.0.2", 00:19:23.224 "hostsvcid": "60000", 00:19:23.224 "adrfam": "ipv4", 00:19:23.224 "trsvcid": "4420", 00:19:23.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.224 "multipath": "failover" 00:19:23.224 } 00:19:23.224 } 00:19:23.224 Got JSON-RPC error response 00:19:23.224 GoRPCClient: error on JSON-RPC call 00:19:23.224 18:14:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@643 -- # es=1 00:19:23.224 18:14:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:23.224 18:14:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:23.224 18:14:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:23.224 18:14:20 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:23.224 18:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.224 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:19:23.224 00:19:23.224 18:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.225 18:14:21 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:23.225 18:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.225 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:19:23.225 18:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.225 18:14:21 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:23.225 18:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.225 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:19:23.225 00:19:23.225 18:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.225 18:14:21 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:23.225 18:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.225 18:14:21 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:23.225 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:19:23.225 18:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.225 18:14:21 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:23.225 18:14:21 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.598 0 00:19:24.598 18:14:22 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:24.598 18:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.598 18:14:22 -- common/autotest_common.sh@10 -- # set +x 00:19:24.598 18:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.598 18:14:22 -- host/multicontroller.sh@100 -- # killprocess 80307 00:19:24.598 18:14:22 -- common/autotest_common.sh@926 -- # '[' -z 80307 ']' 00:19:24.598 18:14:22 -- common/autotest_common.sh@930 -- # kill -0 80307 00:19:24.598 18:14:22 -- common/autotest_common.sh@931 -- # uname 00:19:24.598 18:14:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:19:24.598 18:14:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80307 00:19:24.598 18:14:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:24.598 18:14:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:24.598 killing process with pid 80307 00:19:24.598 18:14:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80307' 00:19:24.598 18:14:22 -- common/autotest_common.sh@945 -- # kill 80307 00:19:24.598 18:14:22 -- common/autotest_common.sh@950 -- # wait 80307 00:19:24.857 18:14:22 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.857 18:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.857 18:14:22 -- common/autotest_common.sh@10 -- # set +x 00:19:24.857 18:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.857 18:14:22 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:24.857 18:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.857 18:14:22 -- common/autotest_common.sh@10 -- # set +x 00:19:24.857 18:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.857 18:14:22 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:24.857 18:14:22 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:24.857 18:14:22 -- common/autotest_common.sh@1597 -- # read -r file 00:19:24.857 18:14:22 -- common/autotest_common.sh@1596 -- # sort -u 00:19:24.857 18:14:22 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:24.857 18:14:22 -- common/autotest_common.sh@1598 -- # cat 00:19:24.857 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:24.857 [2024-04-25 18:14:19.817920] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:24.857 [2024-04-25 18:14:19.818028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80307 ] 00:19:24.857 [2024-04-25 18:14:19.956740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.857 [2024-04-25 18:14:20.064526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.857 [2024-04-25 18:14:21.093757] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 3db83ead-9d29-4a7a-b9e2-e750aef3c7cf already exists 00:19:24.857 [2024-04-25 18:14:21.093806] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:3db83ead-9d29-4a7a-b9e2-e750aef3c7cf alias for bdev NVMe1n1 00:19:24.857 [2024-04-25 18:14:21.093839] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:24.857 Running I/O for 1 seconds... 
00:19:24.857 00:19:24.857 Latency(us) 00:19:24.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.857 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:24.857 NVMe0n1 : 1.00 22552.67 88.10 0.00 0.00 5667.74 2383.13 12094.37 00:19:24.857 =================================================================================================================== 00:19:24.857 Total : 22552.67 88.10 0.00 0.00 5667.74 2383.13 12094.37 00:19:24.857 Received shutdown signal, test time was about 1.000000 seconds 00:19:24.857 00:19:24.857 Latency(us) 00:19:24.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.857 =================================================================================================================== 00:19:24.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.857 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:24.857 18:14:22 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:24.857 18:14:22 -- common/autotest_common.sh@1597 -- # read -r file 00:19:24.857 18:14:22 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:24.857 18:14:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:24.857 18:14:22 -- nvmf/common.sh@116 -- # sync 00:19:24.857 18:14:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:24.857 18:14:22 -- nvmf/common.sh@119 -- # set +e 00:19:24.857 18:14:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:24.857 18:14:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:24.857 rmmod nvme_tcp 00:19:24.857 rmmod nvme_fabrics 00:19:24.857 rmmod nvme_keyring 00:19:24.857 18:14:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:24.857 18:14:22 -- nvmf/common.sh@123 -- # set -e 00:19:24.857 18:14:22 -- nvmf/common.sh@124 -- # return 0 00:19:24.857 18:14:22 -- nvmf/common.sh@477 -- # '[' -n 80255 ']' 00:19:24.857 18:14:22 -- nvmf/common.sh@478 -- # killprocess 80255 00:19:24.857 18:14:22 -- common/autotest_common.sh@926 -- # '[' -z 80255 ']' 00:19:24.857 18:14:22 -- common/autotest_common.sh@930 -- # kill -0 80255 00:19:24.857 18:14:22 -- common/autotest_common.sh@931 -- # uname 00:19:24.857 18:14:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:24.857 18:14:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80255 00:19:24.857 18:14:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:24.857 18:14:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:24.857 killing process with pid 80255 00:19:24.857 18:14:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80255' 00:19:24.857 18:14:22 -- common/autotest_common.sh@945 -- # kill 80255 00:19:24.857 18:14:22 -- common/autotest_common.sh@950 -- # wait 80255 00:19:25.115 18:14:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:25.115 18:14:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:25.115 18:14:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:25.115 18:14:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.115 18:14:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:25.115 18:14:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.115 18:14:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.115 18:14:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.374 18:14:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:25.374 
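[Note: killprocess, traced above for both the bdevperf (80307) and nvmf_tgt (80255) PIDs, is little more than a guarded kill-and-wait. A condensed sketch based on the calls visible in the trace (the real helper also handles sudo-wrapped processes separately):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                       # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap it if it is our child
    }
]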
00:19:25.374 real 0m5.016s 00:19:25.374 user 0m15.733s 00:19:25.374 sys 0m1.160s 00:19:25.374 18:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.374 18:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:25.374 ************************************ 00:19:25.374 END TEST nvmf_multicontroller 00:19:25.374 ************************************ 00:19:25.374 18:14:23 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:25.374 18:14:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:25.374 18:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.374 18:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:25.374 ************************************ 00:19:25.374 START TEST nvmf_aer 00:19:25.374 ************************************ 00:19:25.374 18:14:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:25.374 * Looking for test storage... 00:19:25.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:25.374 18:14:23 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.374 18:14:23 -- nvmf/common.sh@7 -- # uname -s 00:19:25.374 18:14:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.374 18:14:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.374 18:14:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.374 18:14:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.374 18:14:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.374 18:14:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.374 18:14:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.374 18:14:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.374 18:14:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.374 18:14:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.374 18:14:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:25.374 18:14:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:25.374 18:14:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.374 18:14:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.374 18:14:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:25.374 18:14:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.374 18:14:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.374 18:14:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.374 18:14:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.374 18:14:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.374 18:14:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.374 18:14:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.374 18:14:23 -- paths/export.sh@5 -- # export PATH 00:19:25.374 18:14:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.374 18:14:23 -- nvmf/common.sh@46 -- # : 0 00:19:25.374 18:14:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:25.374 18:14:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:25.374 18:14:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:25.374 18:14:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.374 18:14:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.374 18:14:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:25.374 18:14:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:25.374 18:14:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:25.374 18:14:23 -- host/aer.sh@11 -- # nvmftestinit 00:19:25.374 18:14:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:25.375 18:14:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.375 18:14:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:25.375 18:14:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:25.375 18:14:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:25.375 18:14:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.375 18:14:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.375 18:14:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.375 18:14:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:25.375 18:14:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:25.375 18:14:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:25.375 18:14:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:25.375 18:14:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:25.375 18:14:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:25.375 18:14:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.375 18:14:23 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.375 18:14:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:25.375 18:14:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:25.375 18:14:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:25.375 18:14:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:25.375 18:14:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:25.375 18:14:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.375 18:14:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:25.375 18:14:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:25.375 18:14:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:25.375 18:14:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:25.375 18:14:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:25.375 18:14:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:25.375 Cannot find device "nvmf_tgt_br" 00:19:25.375 18:14:23 -- nvmf/common.sh@154 -- # true 00:19:25.375 18:14:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.375 Cannot find device "nvmf_tgt_br2" 00:19:25.375 18:14:23 -- nvmf/common.sh@155 -- # true 00:19:25.375 18:14:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:25.375 18:14:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:25.375 Cannot find device "nvmf_tgt_br" 00:19:25.375 18:14:23 -- nvmf/common.sh@157 -- # true 00:19:25.375 18:14:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:25.375 Cannot find device "nvmf_tgt_br2" 00:19:25.375 18:14:23 -- nvmf/common.sh@158 -- # true 00:19:25.375 18:14:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:25.635 18:14:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:25.635 18:14:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.635 18:14:23 -- nvmf/common.sh@161 -- # true 00:19:25.635 18:14:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:25.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.635 18:14:23 -- nvmf/common.sh@162 -- # true 00:19:25.635 18:14:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:25.635 18:14:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:25.635 18:14:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:25.635 18:14:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:25.635 18:14:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:25.635 18:14:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:25.635 18:14:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:25.635 18:14:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:25.635 18:14:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:25.635 18:14:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:25.635 18:14:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:25.635 18:14:23 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:25.635 18:14:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:25.635 18:14:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:25.635 18:14:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:25.635 18:14:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:25.635 18:14:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:25.635 18:14:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:25.635 18:14:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:25.635 18:14:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:25.635 18:14:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:25.635 18:14:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:25.635 18:14:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:25.635 18:14:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:25.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:19:25.635 00:19:25.635 --- 10.0.0.2 ping statistics --- 00:19:25.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.635 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:25.635 18:14:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:25.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:25.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:19:25.636 00:19:25.636 --- 10.0.0.3 ping statistics --- 00:19:25.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.636 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:25.636 18:14:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:25.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:25.636 00:19:25.636 --- 10.0.0.1 ping statistics --- 00:19:25.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.636 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:25.636 18:14:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.636 18:14:23 -- nvmf/common.sh@421 -- # return 0 00:19:25.636 18:14:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:25.636 18:14:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.636 18:14:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:25.636 18:14:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:25.636 18:14:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.636 18:14:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:25.636 18:14:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:25.636 18:14:23 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:25.636 18:14:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:25.636 18:14:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:25.636 18:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:25.636 18:14:23 -- nvmf/common.sh@469 -- # nvmfpid=80554 00:19:25.636 18:14:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:25.636 18:14:23 -- nvmf/common.sh@470 -- # waitforlisten 80554 00:19:25.636 18:14:23 -- common/autotest_common.sh@819 -- # '[' -z 80554 ']' 00:19:25.636 18:14:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.636 18:14:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.636 18:14:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.636 18:14:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.636 18:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:25.893 [2024-04-25 18:14:23.611188] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:25.893 [2024-04-25 18:14:23.611288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.893 [2024-04-25 18:14:23.743121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.151 [2024-04-25 18:14:23.851559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:26.151 [2024-04-25 18:14:23.851736] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.151 [2024-04-25 18:14:23.851753] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.151 [2024-04-25 18:14:23.851763] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
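The three pings above are the final reachability check of nvmf_veth_init before the target application is launched inside the network namespace. A condensed sketch of the topology those traced commands build is shown below; it is not the exact helper from nvmf/common.sh — the second target leg (nvmf_tgt_if2 / 10.0.0.3) is omitted and the nvmf_tgt path is assumed relative to the SPDK checkout:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br        # host side of the bridge
    ip link set nvmf_tgt_br master nvmf_br         # target side of the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> target over the bridge
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> host
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &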
00:19:26.152 [2024-04-25 18:14:23.851934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.152 [2024-04-25 18:14:23.852458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.152 [2024-04-25 18:14:23.852583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.152 [2024-04-25 18:14:23.852618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.722 18:14:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:26.722 18:14:24 -- common/autotest_common.sh@852 -- # return 0 00:19:26.722 18:14:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:26.722 18:14:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:26.722 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.722 18:14:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.722 18:14:24 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.722 18:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.722 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.722 [2024-04-25 18:14:24.653453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.979 18:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.979 18:14:24 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:26.979 18:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.979 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.979 Malloc0 00:19:26.979 18:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.979 18:14:24 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:26.979 18:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.979 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.979 18:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.979 18:14:24 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.979 18:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.979 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.979 18:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.979 18:14:24 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.979 18:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.979 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.979 [2024-04-25 18:14:24.726830] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.979 18:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.979 18:14:24 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:26.979 18:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.979 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:26.979 [2024-04-25 18:14:24.734512] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:26.979 [ 00:19:26.979 { 00:19:26.979 "allow_any_host": true, 00:19:26.979 "hosts": [], 00:19:26.979 "listen_addresses": [], 00:19:26.979 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:26.979 "subtype": "Discovery" 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "allow_any_host": true, 00:19:26.979 "hosts": 
[], 00:19:26.979 "listen_addresses": [ 00:19:26.979 { 00:19:26.979 "adrfam": "IPv4", 00:19:26.979 "traddr": "10.0.0.2", 00:19:26.979 "transport": "TCP", 00:19:26.979 "trsvcid": "4420", 00:19:26.979 "trtype": "TCP" 00:19:26.979 } 00:19:26.979 ], 00:19:26.979 "max_cntlid": 65519, 00:19:26.979 "max_namespaces": 2, 00:19:26.979 "min_cntlid": 1, 00:19:26.979 "model_number": "SPDK bdev Controller", 00:19:26.979 "namespaces": [ 00:19:26.979 { 00:19:26.979 "bdev_name": "Malloc0", 00:19:26.979 "name": "Malloc0", 00:19:26.979 "nguid": "5F96B09C8BEC47C2A8375AE6C49B4CFC", 00:19:26.979 "nsid": 1, 00:19:26.979 "uuid": "5f96b09c-8bec-47c2-a837-5ae6c49b4cfc" 00:19:26.979 } 00:19:26.979 ], 00:19:26.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.979 "serial_number": "SPDK00000000000001", 00:19:26.979 "subtype": "NVMe" 00:19:26.979 } 00:19:26.979 ] 00:19:26.979 18:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.979 18:14:24 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:26.979 18:14:24 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:26.979 18:14:24 -- host/aer.sh@33 -- # aerpid=80608 00:19:26.979 18:14:24 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:26.979 18:14:24 -- common/autotest_common.sh@1244 -- # local i=0 00:19:26.979 18:14:24 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:26.979 18:14:24 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:26.979 18:14:24 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:19:26.979 18:14:24 -- common/autotest_common.sh@1247 -- # i=1 00:19:26.979 18:14:24 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:26.979 18:14:24 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:26.979 18:14:24 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:19:26.979 18:14:24 -- common/autotest_common.sh@1247 -- # i=2 00:19:26.979 18:14:24 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:27.236 18:14:24 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:27.236 18:14:24 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:27.236 18:14:24 -- common/autotest_common.sh@1255 -- # return 0 00:19:27.236 18:14:24 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:27.236 18:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.236 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:19:27.236 Malloc1 00:19:27.236 18:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.236 18:14:25 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:27.236 18:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.236 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.236 18:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.236 18:14:25 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:27.236 18:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.236 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.236 Asynchronous Event Request test 00:19:27.236 Attaching to 10.0.0.2 00:19:27.236 Attached to 10.0.0.2 00:19:27.236 Registering asynchronous event callbacks... 00:19:27.236 Starting namespace attribute notice tests for all controllers... 
00:19:27.236 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:27.236 aer_cb - Changed Namespace 00:19:27.236 Cleaning up... 00:19:27.236 [ 00:19:27.236 { 00:19:27.236 "allow_any_host": true, 00:19:27.236 "hosts": [], 00:19:27.236 "listen_addresses": [], 00:19:27.236 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:27.236 "subtype": "Discovery" 00:19:27.236 }, 00:19:27.236 { 00:19:27.236 "allow_any_host": true, 00:19:27.236 "hosts": [], 00:19:27.236 "listen_addresses": [ 00:19:27.237 { 00:19:27.237 "adrfam": "IPv4", 00:19:27.237 "traddr": "10.0.0.2", 00:19:27.237 "transport": "TCP", 00:19:27.237 "trsvcid": "4420", 00:19:27.237 "trtype": "TCP" 00:19:27.237 } 00:19:27.237 ], 00:19:27.237 "max_cntlid": 65519, 00:19:27.237 "max_namespaces": 2, 00:19:27.237 "min_cntlid": 1, 00:19:27.237 "model_number": "SPDK bdev Controller", 00:19:27.237 "namespaces": [ 00:19:27.237 { 00:19:27.237 "bdev_name": "Malloc0", 00:19:27.237 "name": "Malloc0", 00:19:27.237 "nguid": "5F96B09C8BEC47C2A8375AE6C49B4CFC", 00:19:27.237 "nsid": 1, 00:19:27.237 "uuid": "5f96b09c-8bec-47c2-a837-5ae6c49b4cfc" 00:19:27.237 }, 00:19:27.237 { 00:19:27.237 "bdev_name": "Malloc1", 00:19:27.237 "name": "Malloc1", 00:19:27.237 "nguid": "CC2833318B044921AF059A2AC1596BB4", 00:19:27.237 "nsid": 2, 00:19:27.237 "uuid": "cc283331-8b04-4921-af05-9a2ac1596bb4" 00:19:27.237 } 00:19:27.237 ], 00:19:27.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.237 "serial_number": "SPDK00000000000001", 00:19:27.237 "subtype": "NVMe" 00:19:27.237 } 00:19:27.237 ] 00:19:27.237 18:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.237 18:14:25 -- host/aer.sh@43 -- # wait 80608 00:19:27.237 18:14:25 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:27.237 18:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.237 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.237 18:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.237 18:14:25 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:27.237 18:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.237 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.237 18:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.237 18:14:25 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.237 18:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.237 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.237 18:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.237 18:14:25 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:27.237 18:14:25 -- host/aer.sh@51 -- # nvmftestfini 00:19:27.237 18:14:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:27.237 18:14:25 -- nvmf/common.sh@116 -- # sync 00:19:27.495 18:14:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:27.495 18:14:25 -- nvmf/common.sh@119 -- # set +e 00:19:27.495 18:14:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:27.495 18:14:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:27.495 rmmod nvme_tcp 00:19:27.495 rmmod nvme_fabrics 00:19:27.495 rmmod nvme_keyring 00:19:27.495 18:14:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:27.495 18:14:25 -- nvmf/common.sh@123 -- # set -e 00:19:27.495 18:14:25 -- nvmf/common.sh@124 -- # return 0 00:19:27.495 18:14:25 -- nvmf/common.sh@477 -- # '[' -n 80554 ']' 00:19:27.495 18:14:25 -- nvmf/common.sh@478 -- # killprocess 80554 00:19:27.495 18:14:25 -- 
common/autotest_common.sh@926 -- # '[' -z 80554 ']' 00:19:27.495 18:14:25 -- common/autotest_common.sh@930 -- # kill -0 80554 00:19:27.495 18:14:25 -- common/autotest_common.sh@931 -- # uname 00:19:27.495 18:14:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:27.495 18:14:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80554 00:19:27.495 18:14:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:27.495 18:14:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:27.495 18:14:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80554' 00:19:27.495 killing process with pid 80554 00:19:27.495 18:14:25 -- common/autotest_common.sh@945 -- # kill 80554 00:19:27.495 [2024-04-25 18:14:25.263689] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:27.495 18:14:25 -- common/autotest_common.sh@950 -- # wait 80554 00:19:27.752 18:14:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:27.752 18:14:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:27.752 18:14:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:27.752 18:14:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.753 18:14:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:27.753 18:14:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.753 18:14:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.753 18:14:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.753 18:14:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:27.753 00:19:27.753 real 0m2.494s 00:19:27.753 user 0m6.887s 00:19:27.753 sys 0m0.692s 00:19:27.753 18:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.753 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.753 ************************************ 00:19:27.753 END TEST nvmf_aer 00:19:27.753 ************************************ 00:19:27.753 18:14:25 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:27.753 18:14:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:27.753 18:14:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:27.753 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.753 ************************************ 00:19:27.753 START TEST nvmf_async_init 00:19:27.753 ************************************ 00:19:27.753 18:14:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:28.011 * Looking for test storage... 
00:19:28.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:28.011 18:14:25 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:28.011 18:14:25 -- nvmf/common.sh@7 -- # uname -s 00:19:28.011 18:14:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.011 18:14:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.011 18:14:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.011 18:14:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.011 18:14:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.011 18:14:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.011 18:14:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.011 18:14:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.011 18:14:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.011 18:14:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.011 18:14:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:28.011 18:14:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:28.011 18:14:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.011 18:14:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.011 18:14:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:28.011 18:14:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:28.011 18:14:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.011 18:14:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.011 18:14:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.011 18:14:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.011 18:14:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.011 18:14:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.011 18:14:25 -- 
paths/export.sh@5 -- # export PATH 00:19:28.011 18:14:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.011 18:14:25 -- nvmf/common.sh@46 -- # : 0 00:19:28.011 18:14:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:28.011 18:14:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:28.011 18:14:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:28.011 18:14:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.011 18:14:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.011 18:14:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:28.011 18:14:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:28.011 18:14:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:28.011 18:14:25 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:28.011 18:14:25 -- host/async_init.sh@14 -- # null_block_size=512 00:19:28.011 18:14:25 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:28.011 18:14:25 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:28.011 18:14:25 -- host/async_init.sh@20 -- # uuidgen 00:19:28.011 18:14:25 -- host/async_init.sh@20 -- # tr -d - 00:19:28.011 18:14:25 -- host/async_init.sh@20 -- # nguid=908383a333e44a94be4312620317a1c3 00:19:28.011 18:14:25 -- host/async_init.sh@22 -- # nvmftestinit 00:19:28.011 18:14:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:28.011 18:14:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.011 18:14:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:28.011 18:14:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:28.011 18:14:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:28.011 18:14:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.011 18:14:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.011 18:14:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.011 18:14:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:28.011 18:14:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:28.011 18:14:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:28.011 18:14:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:28.011 18:14:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:28.011 18:14:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:28.011 18:14:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.011 18:14:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.011 18:14:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:28.011 18:14:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:28.011 18:14:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:28.011 18:14:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:28.011 18:14:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:28.011 18:14:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.011 18:14:25 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:28.011 18:14:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:28.012 18:14:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:28.012 18:14:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:28.012 18:14:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:28.012 18:14:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:28.012 Cannot find device "nvmf_tgt_br" 00:19:28.012 18:14:25 -- nvmf/common.sh@154 -- # true 00:19:28.012 18:14:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.012 Cannot find device "nvmf_tgt_br2" 00:19:28.012 18:14:25 -- nvmf/common.sh@155 -- # true 00:19:28.012 18:14:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:28.012 18:14:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:28.012 Cannot find device "nvmf_tgt_br" 00:19:28.012 18:14:25 -- nvmf/common.sh@157 -- # true 00:19:28.012 18:14:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:28.012 Cannot find device "nvmf_tgt_br2" 00:19:28.012 18:14:25 -- nvmf/common.sh@158 -- # true 00:19:28.012 18:14:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:28.012 18:14:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:28.012 18:14:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.012 18:14:25 -- nvmf/common.sh@161 -- # true 00:19:28.012 18:14:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:28.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.012 18:14:25 -- nvmf/common.sh@162 -- # true 00:19:28.012 18:14:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:28.012 18:14:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:28.012 18:14:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:28.012 18:14:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:28.270 18:14:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:28.270 18:14:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:28.270 18:14:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:28.270 18:14:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:28.270 18:14:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:28.270 18:14:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:28.270 18:14:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:28.270 18:14:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:28.270 18:14:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:28.270 18:14:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:28.270 18:14:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:28.270 18:14:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:28.270 18:14:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:28.270 18:14:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:28.270 18:14:26 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:28.270 18:14:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:28.270 18:14:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:28.270 18:14:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:28.270 18:14:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:28.270 18:14:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:28.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:19:28.270 00:19:28.270 --- 10.0.0.2 ping statistics --- 00:19:28.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.270 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:28.270 18:14:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:28.270 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:28.270 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:28.270 00:19:28.270 --- 10.0.0.3 ping statistics --- 00:19:28.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.270 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:28.270 18:14:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:28.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:28.270 00:19:28.270 --- 10.0.0.1 ping statistics --- 00:19:28.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.270 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:28.270 18:14:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.270 18:14:26 -- nvmf/common.sh@421 -- # return 0 00:19:28.270 18:14:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:28.270 18:14:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.270 18:14:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:28.270 18:14:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:28.270 18:14:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.270 18:14:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:28.270 18:14:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:28.270 18:14:26 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:28.270 18:14:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:28.270 18:14:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:28.270 18:14:26 -- common/autotest_common.sh@10 -- # set +x 00:19:28.270 18:14:26 -- nvmf/common.sh@469 -- # nvmfpid=80784 00:19:28.270 18:14:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:28.270 18:14:26 -- nvmf/common.sh@470 -- # waitforlisten 80784 00:19:28.271 18:14:26 -- common/autotest_common.sh@819 -- # '[' -z 80784 ']' 00:19:28.271 18:14:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.271 18:14:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:28.271 18:14:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:28.271 18:14:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:28.271 18:14:26 -- common/autotest_common.sh@10 -- # set +x 00:19:28.271 [2024-04-25 18:14:26.193981] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:28.271 [2024-04-25 18:14:26.194068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.529 [2024-04-25 18:14:26.330697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.529 [2024-04-25 18:14:26.430844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:28.529 [2024-04-25 18:14:26.431010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.529 [2024-04-25 18:14:26.431025] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.529 [2024-04-25 18:14:26.431035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.529 [2024-04-25 18:14:26.431072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.465 18:14:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:29.465 18:14:27 -- common/autotest_common.sh@852 -- # return 0 00:19:29.465 18:14:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:29.465 18:14:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.465 18:14:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.465 18:14:27 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:29.465 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.465 [2024-04-25 18:14:27.203535] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.465 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.465 18:14:27 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:29.465 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.465 null0 00:19:29.465 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.465 18:14:27 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:29.465 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.465 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.465 18:14:27 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:29.465 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.465 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.465 18:14:27 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 908383a333e44a94be4312620317a1c3 00:19:29.465 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.465 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.465 18:14:27 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:29.465 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.465 [2024-04-25 18:14:27.243696] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.465 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.465 18:14:27 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:29.465 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.465 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.723 nvme0n1 00:19:29.723 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.723 18:14:27 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:29.723 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.723 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.723 [ 00:19:29.723 { 00:19:29.723 "aliases": [ 00:19:29.723 "908383a3-33e4-4a94-be43-12620317a1c3" 00:19:29.723 ], 00:19:29.723 "assigned_rate_limits": { 00:19:29.723 "r_mbytes_per_sec": 0, 00:19:29.723 "rw_ios_per_sec": 0, 00:19:29.723 "rw_mbytes_per_sec": 0, 00:19:29.723 "w_mbytes_per_sec": 0 00:19:29.723 }, 00:19:29.723 "block_size": 512, 00:19:29.723 "claimed": false, 00:19:29.723 "driver_specific": { 00:19:29.723 "mp_policy": "active_passive", 00:19:29.723 "nvme": [ 00:19:29.723 { 00:19:29.723 "ctrlr_data": { 00:19:29.723 "ana_reporting": false, 00:19:29.723 "cntlid": 1, 00:19:29.723 "firmware_revision": "24.01.1", 00:19:29.723 "model_number": "SPDK bdev Controller", 00:19:29.723 "multi_ctrlr": true, 00:19:29.723 "oacs": { 00:19:29.723 "firmware": 0, 00:19:29.723 "format": 0, 00:19:29.723 "ns_manage": 0, 00:19:29.723 "security": 0 00:19:29.723 }, 00:19:29.723 "serial_number": "00000000000000000000", 00:19:29.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.723 "vendor_id": "0x8086" 00:19:29.723 }, 00:19:29.723 "ns_data": { 00:19:29.723 "can_share": true, 00:19:29.723 "id": 1 00:19:29.723 }, 00:19:29.723 "trid": { 00:19:29.723 "adrfam": "IPv4", 00:19:29.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.723 "traddr": "10.0.0.2", 00:19:29.723 "trsvcid": "4420", 00:19:29.723 "trtype": "TCP" 00:19:29.723 }, 00:19:29.723 "vs": { 00:19:29.723 "nvme_version": "1.3" 00:19:29.723 } 00:19:29.723 } 00:19:29.723 ] 00:19:29.723 }, 00:19:29.723 "name": "nvme0n1", 00:19:29.723 "num_blocks": 2097152, 00:19:29.723 "product_name": "NVMe disk", 00:19:29.723 "supported_io_types": { 00:19:29.723 "abort": true, 00:19:29.723 "compare": true, 00:19:29.723 "compare_and_write": true, 00:19:29.723 "flush": true, 00:19:29.723 "nvme_admin": true, 00:19:29.723 "nvme_io": true, 00:19:29.723 "read": true, 00:19:29.723 "reset": true, 00:19:29.723 "unmap": false, 00:19:29.723 "write": true, 00:19:29.723 "write_zeroes": true 00:19:29.723 }, 00:19:29.723 "uuid": "908383a3-33e4-4a94-be43-12620317a1c3", 00:19:29.723 "zoned": false 00:19:29.723 } 00:19:29.723 ] 00:19:29.723 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.723 18:14:27 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:29.723 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.723 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.723 [2024-04-25 18:14:27.499789] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:29.723 [2024-04-25 18:14:27.499872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91f7a0 (9): Bad file descriptor 00:19:29.723 [2024-04-25 18:14:27.631473] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:29.723 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.723 18:14:27 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:29.723 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.723 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.723 [ 00:19:29.723 { 00:19:29.723 "aliases": [ 00:19:29.723 "908383a3-33e4-4a94-be43-12620317a1c3" 00:19:29.723 ], 00:19:29.723 "assigned_rate_limits": { 00:19:29.723 "r_mbytes_per_sec": 0, 00:19:29.723 "rw_ios_per_sec": 0, 00:19:29.723 "rw_mbytes_per_sec": 0, 00:19:29.723 "w_mbytes_per_sec": 0 00:19:29.723 }, 00:19:29.723 "block_size": 512, 00:19:29.723 "claimed": false, 00:19:29.723 "driver_specific": { 00:19:29.723 "mp_policy": "active_passive", 00:19:29.723 "nvme": [ 00:19:29.723 { 00:19:29.723 "ctrlr_data": { 00:19:29.723 "ana_reporting": false, 00:19:29.723 "cntlid": 2, 00:19:29.723 "firmware_revision": "24.01.1", 00:19:29.723 "model_number": "SPDK bdev Controller", 00:19:29.723 "multi_ctrlr": true, 00:19:29.723 "oacs": { 00:19:29.723 "firmware": 0, 00:19:29.723 "format": 0, 00:19:29.723 "ns_manage": 0, 00:19:29.723 "security": 0 00:19:29.723 }, 00:19:29.723 "serial_number": "00000000000000000000", 00:19:29.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.723 "vendor_id": "0x8086" 00:19:29.723 }, 00:19:29.723 "ns_data": { 00:19:29.723 "can_share": true, 00:19:29.723 "id": 1 00:19:29.723 }, 00:19:29.723 "trid": { 00:19:29.723 "adrfam": "IPv4", 00:19:29.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.723 "traddr": "10.0.0.2", 00:19:29.723 "trsvcid": "4420", 00:19:29.723 "trtype": "TCP" 00:19:29.723 }, 00:19:29.723 "vs": { 00:19:29.723 "nvme_version": "1.3" 00:19:29.723 } 00:19:29.723 } 00:19:29.723 ] 00:19:29.723 }, 00:19:29.723 "name": "nvme0n1", 00:19:29.723 "num_blocks": 2097152, 00:19:29.723 "product_name": "NVMe disk", 00:19:29.723 "supported_io_types": { 00:19:29.723 "abort": true, 00:19:29.723 "compare": true, 00:19:29.723 "compare_and_write": true, 00:19:29.723 "flush": true, 00:19:29.723 "nvme_admin": true, 00:19:29.723 "nvme_io": true, 00:19:29.723 "read": true, 00:19:29.723 "reset": true, 00:19:29.723 "unmap": false, 00:19:29.723 "write": true, 00:19:29.723 "write_zeroes": true 00:19:29.723 }, 00:19:29.723 "uuid": "908383a3-33e4-4a94-be43-12620317a1c3", 00:19:29.723 "zoned": false 00:19:29.723 } 00:19:29.723 ] 00:19:29.723 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.723 18:14:27 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.723 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.723 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.981 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.981 18:14:27 -- host/async_init.sh@53 -- # mktemp 00:19:29.982 18:14:27 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4AnMhZLd6T 00:19:29.982 18:14:27 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:29.982 18:14:27 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4AnMhZLd6T 00:19:29.982 18:14:27 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:19:29.982 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.982 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.982 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.982 18:14:27 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:29.982 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.982 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.982 [2024-04-25 18:14:27.691960] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.982 [2024-04-25 18:14:27.692113] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:29.982 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.982 18:14:27 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4AnMhZLd6T 00:19:29.982 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.982 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.982 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.982 18:14:27 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4AnMhZLd6T 00:19:29.982 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.982 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.982 [2024-04-25 18:14:27.707918] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.982 nvme0n1 00:19:29.982 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.982 18:14:27 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:29.982 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.982 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.982 [ 00:19:29.982 { 00:19:29.982 "aliases": [ 00:19:29.982 "908383a3-33e4-4a94-be43-12620317a1c3" 00:19:29.982 ], 00:19:29.982 "assigned_rate_limits": { 00:19:29.982 "r_mbytes_per_sec": 0, 00:19:29.982 "rw_ios_per_sec": 0, 00:19:29.982 "rw_mbytes_per_sec": 0, 00:19:29.982 "w_mbytes_per_sec": 0 00:19:29.982 }, 00:19:29.982 "block_size": 512, 00:19:29.982 "claimed": false, 00:19:29.982 "driver_specific": { 00:19:29.982 "mp_policy": "active_passive", 00:19:29.982 "nvme": [ 00:19:29.982 { 00:19:29.982 "ctrlr_data": { 00:19:29.982 "ana_reporting": false, 00:19:29.982 "cntlid": 3, 00:19:29.982 "firmware_revision": "24.01.1", 00:19:29.982 "model_number": "SPDK bdev Controller", 00:19:29.982 "multi_ctrlr": true, 00:19:29.982 "oacs": { 00:19:29.982 "firmware": 0, 00:19:29.982 "format": 0, 00:19:29.982 "ns_manage": 0, 00:19:29.982 "security": 0 00:19:29.982 }, 00:19:29.982 "serial_number": "00000000000000000000", 00:19:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.982 "vendor_id": "0x8086" 00:19:29.982 }, 00:19:29.982 "ns_data": { 00:19:29.982 "can_share": true, 00:19:29.982 "id": 1 00:19:29.982 }, 00:19:29.982 "trid": { 00:19:29.982 "adrfam": "IPv4", 00:19:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.982 "traddr": "10.0.0.2", 00:19:29.982 "trsvcid": "4421", 00:19:29.982 "trtype": "TCP" 00:19:29.982 }, 00:19:29.982 "vs": { 00:19:29.982 "nvme_version": "1.3" 00:19:29.982 } 00:19:29.982 } 00:19:29.982 ] 00:19:29.982 }, 00:19:29.982 
"name": "nvme0n1", 00:19:29.982 "num_blocks": 2097152, 00:19:29.982 "product_name": "NVMe disk", 00:19:29.982 "supported_io_types": { 00:19:29.982 "abort": true, 00:19:29.982 "compare": true, 00:19:29.982 "compare_and_write": true, 00:19:29.982 "flush": true, 00:19:29.982 "nvme_admin": true, 00:19:29.982 "nvme_io": true, 00:19:29.982 "read": true, 00:19:29.982 "reset": true, 00:19:29.982 "unmap": false, 00:19:29.982 "write": true, 00:19:29.982 "write_zeroes": true 00:19:29.982 }, 00:19:29.982 "uuid": "908383a3-33e4-4a94-be43-12620317a1c3", 00:19:29.982 "zoned": false 00:19:29.982 } 00:19:29.982 ] 00:19:29.982 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.982 18:14:27 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.982 18:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.982 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.982 18:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.982 18:14:27 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4AnMhZLd6T 00:19:29.982 18:14:27 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:29.982 18:14:27 -- host/async_init.sh@78 -- # nvmftestfini 00:19:29.982 18:14:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:29.982 18:14:27 -- nvmf/common.sh@116 -- # sync 00:19:29.982 18:14:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:29.982 18:14:27 -- nvmf/common.sh@119 -- # set +e 00:19:29.982 18:14:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:29.982 18:14:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:29.982 rmmod nvme_tcp 00:19:29.982 rmmod nvme_fabrics 00:19:29.982 rmmod nvme_keyring 00:19:29.982 18:14:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:30.241 18:14:27 -- nvmf/common.sh@123 -- # set -e 00:19:30.241 18:14:27 -- nvmf/common.sh@124 -- # return 0 00:19:30.241 18:14:27 -- nvmf/common.sh@477 -- # '[' -n 80784 ']' 00:19:30.241 18:14:27 -- nvmf/common.sh@478 -- # killprocess 80784 00:19:30.241 18:14:27 -- common/autotest_common.sh@926 -- # '[' -z 80784 ']' 00:19:30.241 18:14:27 -- common/autotest_common.sh@930 -- # kill -0 80784 00:19:30.241 18:14:27 -- common/autotest_common.sh@931 -- # uname 00:19:30.241 18:14:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:30.241 18:14:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80784 00:19:30.241 18:14:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:30.241 18:14:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:30.241 18:14:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80784' 00:19:30.241 killing process with pid 80784 00:19:30.241 18:14:27 -- common/autotest_common.sh@945 -- # kill 80784 00:19:30.241 18:14:27 -- common/autotest_common.sh@950 -- # wait 80784 00:19:30.500 18:14:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:30.500 18:14:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:30.500 18:14:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:30.500 18:14:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.500 18:14:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:30.500 18:14:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.500 18:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.500 18:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.500 18:14:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:30.500 
00:19:30.500 real 0m2.614s 00:19:30.500 user 0m2.384s 00:19:30.500 sys 0m0.652s 00:19:30.500 18:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.500 18:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.500 ************************************ 00:19:30.500 END TEST nvmf_async_init 00:19:30.500 ************************************ 00:19:30.500 18:14:28 -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:30.500 18:14:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:30.500 18:14:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:30.500 18:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.500 ************************************ 00:19:30.500 START TEST dma 00:19:30.500 ************************************ 00:19:30.500 18:14:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:30.500 * Looking for test storage... 00:19:30.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.500 18:14:28 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.500 18:14:28 -- nvmf/common.sh@7 -- # uname -s 00:19:30.500 18:14:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.500 18:14:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.500 18:14:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.500 18:14:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.500 18:14:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.500 18:14:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.500 18:14:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.500 18:14:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.500 18:14:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.500 18:14:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.500 18:14:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:30.500 18:14:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:30.500 18:14:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.500 18:14:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.500 18:14:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.500 18:14:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.500 18:14:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.500 18:14:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.500 18:14:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.500 18:14:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.500 18:14:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.500 18:14:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.500 18:14:28 -- paths/export.sh@5 -- # export PATH 00:19:30.501 18:14:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.501 18:14:28 -- nvmf/common.sh@46 -- # : 0 00:19:30.501 18:14:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.501 18:14:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.501 18:14:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.501 18:14:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.501 18:14:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.501 18:14:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:30.501 18:14:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.501 18:14:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.760 18:14:28 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:30.760 18:14:28 -- host/dma.sh@13 -- # exit 0 00:19:30.760 00:19:30.760 real 0m0.095s 00:19:30.760 user 0m0.039s 00:19:30.760 sys 0m0.063s 00:19:30.760 18:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.760 18:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.760 ************************************ 00:19:30.760 END TEST dma 00:19:30.760 ************************************ 00:19:30.760 18:14:28 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:30.760 18:14:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:30.761 18:14:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:30.761 18:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.761 ************************************ 00:19:30.761 START TEST nvmf_identify 00:19:30.761 ************************************ 00:19:30.761 18:14:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:30.761 * Looking for test storage... 
00:19:30.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.761 18:14:28 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.761 18:14:28 -- nvmf/common.sh@7 -- # uname -s 00:19:30.761 18:14:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.761 18:14:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.761 18:14:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.761 18:14:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.761 18:14:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.761 18:14:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.761 18:14:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.761 18:14:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.761 18:14:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.761 18:14:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.761 18:14:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:30.761 18:14:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:30.761 18:14:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.761 18:14:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.761 18:14:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.761 18:14:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.761 18:14:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.761 18:14:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.761 18:14:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.761 18:14:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.761 18:14:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.761 18:14:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.761 18:14:28 -- paths/export.sh@5 
-- # export PATH 00:19:30.761 18:14:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.761 18:14:28 -- nvmf/common.sh@46 -- # : 0 00:19:30.761 18:14:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.761 18:14:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.761 18:14:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.761 18:14:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.761 18:14:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.761 18:14:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:30.761 18:14:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.761 18:14:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.761 18:14:28 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:30.761 18:14:28 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:30.761 18:14:28 -- host/identify.sh@14 -- # nvmftestinit 00:19:30.761 18:14:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.761 18:14:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.761 18:14:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.761 18:14:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.761 18:14:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.761 18:14:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.761 18:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.761 18:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.761 18:14:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:30.761 18:14:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:30.761 18:14:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:30.761 18:14:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:30.761 18:14:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:30.761 18:14:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:30.761 18:14:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.761 18:14:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.761 18:14:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.761 18:14:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:30.761 18:14:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.761 18:14:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.761 18:14:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.761 18:14:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.761 18:14:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.761 18:14:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.761 18:14:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.761 18:14:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.761 18:14:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:30.761 18:14:28 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:30.761 Cannot find device "nvmf_tgt_br" 00:19:30.761 18:14:28 -- nvmf/common.sh@154 -- # true 00:19:30.761 18:14:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.761 Cannot find device "nvmf_tgt_br2" 00:19:30.761 18:14:28 -- nvmf/common.sh@155 -- # true 00:19:30.761 18:14:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:30.761 18:14:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:30.761 Cannot find device "nvmf_tgt_br" 00:19:30.761 18:14:28 -- nvmf/common.sh@157 -- # true 00:19:30.761 18:14:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:30.761 Cannot find device "nvmf_tgt_br2" 00:19:30.761 18:14:28 -- nvmf/common.sh@158 -- # true 00:19:30.761 18:14:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:31.021 18:14:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:31.021 18:14:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.021 18:14:28 -- nvmf/common.sh@161 -- # true 00:19:31.021 18:14:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.021 18:14:28 -- nvmf/common.sh@162 -- # true 00:19:31.021 18:14:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:31.021 18:14:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:31.021 18:14:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:31.021 18:14:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:31.021 18:14:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:31.021 18:14:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:31.021 18:14:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:31.021 18:14:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:31.021 18:14:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:31.021 18:14:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:31.021 18:14:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:31.021 18:14:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:31.021 18:14:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:31.021 18:14:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.021 18:14:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.021 18:14:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.021 18:14:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:31.021 18:14:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:31.021 18:14:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.021 18:14:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.021 18:14:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.021 18:14:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.021 18:14:28 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.021 18:14:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:31.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:19:31.021 00:19:31.021 --- 10.0.0.2 ping statistics --- 00:19:31.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.021 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:31.021 18:14:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:31.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:31.021 00:19:31.021 --- 10.0.0.3 ping statistics --- 00:19:31.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.021 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:31.021 18:14:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:31.021 00:19:31.021 --- 10.0.0.1 ping statistics --- 00:19:31.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.021 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:31.021 18:14:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.021 18:14:28 -- nvmf/common.sh@421 -- # return 0 00:19:31.021 18:14:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:31.021 18:14:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.021 18:14:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:31.021 18:14:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:31.021 18:14:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.021 18:14:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:31.021 18:14:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:31.280 18:14:28 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:31.280 18:14:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:31.280 18:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:31.280 18:14:28 -- host/identify.sh@19 -- # nvmfpid=81046 00:19:31.280 18:14:28 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:31.280 18:14:28 -- host/identify.sh@23 -- # waitforlisten 81046 00:19:31.280 18:14:28 -- common/autotest_common.sh@819 -- # '[' -z 81046 ']' 00:19:31.280 18:14:28 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:31.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.280 18:14:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.280 18:14:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:31.280 18:14:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.280 18:14:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:31.280 18:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:31.280 [2024-04-25 18:14:29.024381] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
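For reference, the nvmf_veth_init steps above build the all-virtual topology that NET_TYPE=virt selects: the target runs inside the nvmf_tgt_ns_spdk network namespace and is reached over veth pairs joined by the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the target side. A condensed sketch of the commands from the xtrace output (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and is omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # connectivity check, matching the ping statistics above

The sub-millisecond round-trip times in the ping output are expected, since all traffic stays on the local bridge.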
00:19:31.280 [2024-04-25 18:14:29.024479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.280 [2024-04-25 18:14:29.161258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.539 [2024-04-25 18:14:29.273340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:31.539 [2024-04-25 18:14:29.273836] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.539 [2024-04-25 18:14:29.273971] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.539 [2024-04-25 18:14:29.274119] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.539 [2024-04-25 18:14:29.274498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.539 [2024-04-25 18:14:29.274713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.539 [2024-04-25 18:14:29.274845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.539 [2024-04-25 18:14:29.274872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.106 18:14:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:32.106 18:14:29 -- common/autotest_common.sh@852 -- # return 0 00:19:32.106 18:14:29 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:32.106 18:14:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.106 18:14:29 -- common/autotest_common.sh@10 -- # set +x 00:19:32.106 [2024-04-25 18:14:29.989418] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.106 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.106 18:14:30 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:32.106 18:14:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:32.106 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.366 18:14:30 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:32.366 18:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.366 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.366 Malloc0 00:19:32.366 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.366 18:14:30 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:32.366 18:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.366 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.366 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.366 18:14:30 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:32.366 18:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.366 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.366 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.366 18:14:30 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.366 18:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.366 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.366 [2024-04-25 18:14:30.105509] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.366 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.366 18:14:30 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:32.366 18:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.366 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.366 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.366 18:14:30 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:32.366 18:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.366 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.366 [2024-04-25 18:14:30.121155] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:32.366 [ 00:19:32.366 { 00:19:32.366 "allow_any_host": true, 00:19:32.366 "hosts": [], 00:19:32.366 "listen_addresses": [ 00:19:32.366 { 00:19:32.366 "adrfam": "IPv4", 00:19:32.366 "traddr": "10.0.0.2", 00:19:32.366 "transport": "TCP", 00:19:32.366 "trsvcid": "4420", 00:19:32.366 "trtype": "TCP" 00:19:32.366 } 00:19:32.366 ], 00:19:32.366 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:32.366 "subtype": "Discovery" 00:19:32.366 }, 00:19:32.366 { 00:19:32.366 "allow_any_host": true, 00:19:32.366 "hosts": [], 00:19:32.366 "listen_addresses": [ 00:19:32.366 { 00:19:32.366 "adrfam": "IPv4", 00:19:32.366 "traddr": "10.0.0.2", 00:19:32.366 "transport": "TCP", 00:19:32.366 "trsvcid": "4420", 00:19:32.366 "trtype": "TCP" 00:19:32.366 } 00:19:32.366 ], 00:19:32.366 "max_cntlid": 65519, 00:19:32.367 "max_namespaces": 32, 00:19:32.367 "min_cntlid": 1, 00:19:32.367 "model_number": "SPDK bdev Controller", 00:19:32.367 "namespaces": [ 00:19:32.367 { 00:19:32.367 "bdev_name": "Malloc0", 00:19:32.367 "eui64": "ABCDEF0123456789", 00:19:32.367 "name": "Malloc0", 00:19:32.367 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:32.367 "nsid": 1, 00:19:32.367 "uuid": "124930f9-91ea-4d45-b079-459a2b9ef550" 00:19:32.367 } 00:19:32.367 ], 00:19:32.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.367 "serial_number": "SPDK00000000000001", 00:19:32.367 "subtype": "NVMe" 00:19:32.367 } 00:19:32.367 ] 00:19:32.367 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.367 18:14:30 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:32.367 [2024-04-25 18:14:30.150705] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
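The nvmf_identify test then configures a minimal target before querying it: a TCP transport, a 64 MiB Malloc bdev exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1, and both a data listener and a discovery listener on 10.0.0.2:4420. Condensed from the RPC calls above, together with the host-side identify invocation that produces the debug trace below:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # host side: dump identify data for the discovery subsystem, with full debug tracing
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The nvmf_get_subsystems output above reflects exactly this configuration: the discovery subsystem plus cnode1 with the Malloc0 namespace.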
00:19:32.367 [2024-04-25 18:14:30.150756] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81105 ] 00:19:32.367 [2024-04-25 18:14:30.282236] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:32.367 [2024-04-25 18:14:30.282340] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:32.367 [2024-04-25 18:14:30.282351] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:32.367 [2024-04-25 18:14:30.282366] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:32.367 [2024-04-25 18:14:30.282379] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:32.367 [2024-04-25 18:14:30.282581] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:32.367 [2024-04-25 18:14:30.282670] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d40270 0 00:19:32.367 [2024-04-25 18:14:30.287293] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:32.367 [2024-04-25 18:14:30.287319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:32.367 [2024-04-25 18:14:30.287326] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:32.367 [2024-04-25 18:14:30.287330] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:32.367 [2024-04-25 18:14:30.287384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.287394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.287398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.367 [2024-04-25 18:14:30.287413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:32.367 [2024-04-25 18:14:30.287448] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.367 [2024-04-25 18:14:30.295313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.367 [2024-04-25 18:14:30.295335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.367 [2024-04-25 18:14:30.295340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.367 [2024-04-25 18:14:30.295358] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:32.367 [2024-04-25 18:14:30.295366] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:32.367 [2024-04-25 18:14:30.295373] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:32.367 [2024-04-25 18:14:30.295396] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295402] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.367 [2024-04-25 
18:14:30.295406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.367 [2024-04-25 18:14:30.295415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.367 [2024-04-25 18:14:30.295448] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.367 [2024-04-25 18:14:30.295520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.367 [2024-04-25 18:14:30.295528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.367 [2024-04-25 18:14:30.295532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.367 [2024-04-25 18:14:30.295548] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:32.367 [2024-04-25 18:14:30.295557] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:32.367 [2024-04-25 18:14:30.295565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.367 [2024-04-25 18:14:30.295581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.367 [2024-04-25 18:14:30.295612] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.367 [2024-04-25 18:14:30.295702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.367 [2024-04-25 18:14:30.295709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.367 [2024-04-25 18:14:30.295713] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295717] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.367 [2024-04-25 18:14:30.295724] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:32.367 [2024-04-25 18:14:30.295733] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:32.367 [2024-04-25 18:14:30.295741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295745] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.367 [2024-04-25 18:14:30.295756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.367 [2024-04-25 18:14:30.295779] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.367 [2024-04-25 18:14:30.295832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.367 [2024-04-25 18:14:30.295839] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.367 [2024-04-25 18:14:30.295843] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295847] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.367 [2024-04-25 18:14:30.295854] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:32.367 [2024-04-25 18:14:30.295864] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295873] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.367 [2024-04-25 18:14:30.295880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.367 [2024-04-25 18:14:30.295912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.367 [2024-04-25 18:14:30.295983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.367 [2024-04-25 18:14:30.295990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.367 [2024-04-25 18:14:30.295994] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.295998] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.367 [2024-04-25 18:14:30.296004] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:32.367 [2024-04-25 18:14:30.296009] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:32.367 [2024-04-25 18:14:30.296017] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:32.367 [2024-04-25 18:14:30.296123] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:32.367 [2024-04-25 18:14:30.296129] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:32.367 [2024-04-25 18:14:30.296139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.296144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.296147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.367 [2024-04-25 18:14:30.296154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.367 [2024-04-25 18:14:30.296178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.367 [2024-04-25 18:14:30.296234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.367 [2024-04-25 18:14:30.296242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.367 [2024-04-25 18:14:30.296245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
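The -L all trace interleaved here is mostly the standard NVMe over Fabrics controller bring-up, readable from the state-machine messages above and below this point: the ICReq/ICResp exchange on the freshly connected TCP socket, FABRIC CONNECT on the admin queue (CNTLID 0x0001), property reads of VS and CAP, a disable/enable cycle (CC.EN = 0 with CSTS.RDY = 0, then CC.EN = 1 and a wait for CSTS.RDY = 1), followed by IDENTIFY CONTROLLER, AER configuration and keep-alive setup. For orientation only, the kernel-initiator analogue of this discovery query would use nvme-cli; this run does not issue it, it is shown purely as a hypothetical equivalent of the spdk_nvme_identify call above:

    # hypothetical nvme-cli equivalent, not part of this test run
    nvme discover -t tcp -a 10.0.0.2 -s 4420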
00:19:32.367 [2024-04-25 18:14:30.296249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.367 [2024-04-25 18:14:30.296255] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:32.367 [2024-04-25 18:14:30.296266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.296283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.296289] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.367 [2024-04-25 18:14:30.296297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.367 [2024-04-25 18:14:30.296323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.367 [2024-04-25 18:14:30.296407] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.367 [2024-04-25 18:14:30.296414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.367 [2024-04-25 18:14:30.296418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.367 [2024-04-25 18:14:30.296421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.368 [2024-04-25 18:14:30.296427] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:32.368 [2024-04-25 18:14:30.296432] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:32.368 [2024-04-25 18:14:30.296446] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:32.368 [2024-04-25 18:14:30.296458] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:32.368 [2024-04-25 18:14:30.296474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.296490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.368 [2024-04-25 18:14:30.296515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.368 [2024-04-25 18:14:30.296611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.368 [2024-04-25 18:14:30.296618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.368 [2024-04-25 18:14:30.296622] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296627] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d40270): datao=0, datal=4096, cccid=0 00:19:32.368 [2024-04-25 18:14:30.296632] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d7f6d0) on tqpair(0x1d40270): expected_datao=0, 
payload_size=4096 00:19:32.368 [2024-04-25 18:14:30.296641] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296646] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.368 [2024-04-25 18:14:30.296661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.368 [2024-04-25 18:14:30.296665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.368 [2024-04-25 18:14:30.296691] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:32.368 [2024-04-25 18:14:30.296702] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:32.368 [2024-04-25 18:14:30.296706] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:32.368 [2024-04-25 18:14:30.296712] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:32.368 [2024-04-25 18:14:30.296717] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:32.368 [2024-04-25 18:14:30.296730] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:32.368 [2024-04-25 18:14:30.296739] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:32.368 [2024-04-25 18:14:30.296747] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296752] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.296763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.368 [2024-04-25 18:14:30.296788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.368 [2024-04-25 18:14:30.296873] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.368 [2024-04-25 18:14:30.296881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.368 [2024-04-25 18:14:30.296885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7f6d0) on tqpair=0x1d40270 00:19:32.368 [2024-04-25 18:14:30.296898] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.296913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.368 [2024-04-25 
18:14:30.296919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296923] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296926] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.296932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.368 [2024-04-25 18:14:30.296938] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296942] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.296951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.368 [2024-04-25 18:14:30.296957] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296960] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.296964] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.296969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.368 [2024-04-25 18:14:30.296975] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:32.368 [2024-04-25 18:14:30.296998] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:32.368 [2024-04-25 18:14:30.297006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297014] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.297020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.368 [2024-04-25 18:14:30.297046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f6d0, cid 0, qid 0 00:19:32.368 [2024-04-25 18:14:30.297054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f830, cid 1, qid 0 00:19:32.368 [2024-04-25 18:14:30.297059] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7f990, cid 2, qid 0 00:19:32.368 [2024-04-25 18:14:30.297063] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.368 [2024-04-25 18:14:30.297067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7fc50, cid 4, qid 0 00:19:32.368 [2024-04-25 18:14:30.297175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.368 [2024-04-25 18:14:30.297193] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.368 [2024-04-25 18:14:30.297198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297201] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1d7fc50) on tqpair=0x1d40270 00:19:32.368 [2024-04-25 18:14:30.297209] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:32.368 [2024-04-25 18:14:30.297214] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:32.368 [2024-04-25 18:14:30.297227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297232] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.297243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.368 [2024-04-25 18:14:30.297267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7fc50, cid 4, qid 0 00:19:32.368 [2024-04-25 18:14:30.297367] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.368 [2024-04-25 18:14:30.297376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.368 [2024-04-25 18:14:30.297380] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297384] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d40270): datao=0, datal=4096, cccid=4 00:19:32.368 [2024-04-25 18:14:30.297389] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d7fc50) on tqpair(0x1d40270): expected_datao=0, payload_size=4096 00:19:32.368 [2024-04-25 18:14:30.297396] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297400] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.368 [2024-04-25 18:14:30.297415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.368 [2024-04-25 18:14:30.297419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7fc50) on tqpair=0x1d40270 00:19:32.368 [2024-04-25 18:14:30.297438] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:32.368 [2024-04-25 18:14:30.297464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.297480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.368 [2024-04-25 18:14:30.297488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297492] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.368 [2024-04-25 18:14:30.297495] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d40270) 00:19:32.368 [2024-04-25 18:14:30.297510] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.368 [2024-04-25 18:14:30.297544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7fc50, cid 4, qid 0 00:19:32.368 [2024-04-25 18:14:30.297553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7fdb0, cid 5, qid 0 00:19:32.368 [2024-04-25 18:14:30.297674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.369 [2024-04-25 18:14:30.297682] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.369 [2024-04-25 18:14:30.297686] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.369 [2024-04-25 18:14:30.297689] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d40270): datao=0, datal=1024, cccid=4 00:19:32.369 [2024-04-25 18:14:30.297693] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d7fc50) on tqpair(0x1d40270): expected_datao=0, payload_size=1024 00:19:32.369 [2024-04-25 18:14:30.297701] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.369 [2024-04-25 18:14:30.297704] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.369 [2024-04-25 18:14:30.297710] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.369 [2024-04-25 18:14:30.297715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.369 [2024-04-25 18:14:30.297719] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.369 [2024-04-25 18:14:30.297722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7fdb0) on tqpair=0x1d40270 00:19:32.629 [2024-04-25 18:14:30.342340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.629 [2024-04-25 18:14:30.342364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.629 [2024-04-25 18:14:30.342369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7fc50) on tqpair=0x1d40270 00:19:32.629 [2024-04-25 18:14:30.342391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d40270) 00:19:32.629 [2024-04-25 18:14:30.342410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.629 [2024-04-25 18:14:30.342449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7fc50, cid 4, qid 0 00:19:32.629 [2024-04-25 18:14:30.342524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.629 [2024-04-25 18:14:30.342531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.629 [2024-04-25 18:14:30.342535] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342539] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d40270): datao=0, datal=3072, cccid=4 00:19:32.629 [2024-04-25 18:14:30.342544] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d7fc50) on tqpair(0x1d40270): expected_datao=0, payload_size=3072 00:19:32.629 [2024-04-25 
18:14:30.342558] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342562] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.629 [2024-04-25 18:14:30.342577] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.629 [2024-04-25 18:14:30.342581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7fc50) on tqpair=0x1d40270 00:19:32.629 [2024-04-25 18:14:30.342597] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.629 [2024-04-25 18:14:30.342605] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d40270) 00:19:32.629 [2024-04-25 18:14:30.342612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.629 [2024-04-25 18:14:30.342659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7fc50, cid 4, qid 0 00:19:32.630 [2024-04-25 18:14:30.342725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.630 [2024-04-25 18:14:30.342732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.630 [2024-04-25 18:14:30.342736] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.630 [2024-04-25 18:14:30.342740] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d40270): datao=0, datal=8, cccid=4 00:19:32.630 [2024-04-25 18:14:30.342744] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d7fc50) on tqpair(0x1d40270): expected_datao=0, payload_size=8 00:19:32.630 [2024-04-25 18:14:30.342751] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.630 [2024-04-25 18:14:30.342755] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.630 ===================================================== 00:19:32.630 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:32.630 ===================================================== 00:19:32.630 Controller Capabilities/Features 00:19:32.630 ================================ 00:19:32.630 Vendor ID: 0000 00:19:32.630 Subsystem Vendor ID: 0000 00:19:32.630 Serial Number: .................... 00:19:32.630 Model Number: ........................................ 
00:19:32.630 Firmware Version: 24.01.1 00:19:32.630 Recommended Arb Burst: 0 00:19:32.630 IEEE OUI Identifier: 00 00 00 00:19:32.630 Multi-path I/O 00:19:32.630 May have multiple subsystem ports: No 00:19:32.630 May have multiple controllers: No 00:19:32.630 Associated with SR-IOV VF: No 00:19:32.630 Max Data Transfer Size: 131072 00:19:32.630 Max Number of Namespaces: 0 00:19:32.630 Max Number of I/O Queues: 1024 00:19:32.630 NVMe Specification Version (VS): 1.3 00:19:32.630 NVMe Specification Version (Identify): 1.3 00:19:32.630 Maximum Queue Entries: 128 00:19:32.630 Contiguous Queues Required: Yes 00:19:32.630 Arbitration Mechanisms Supported 00:19:32.630 Weighted Round Robin: Not Supported 00:19:32.630 Vendor Specific: Not Supported 00:19:32.630 Reset Timeout: 15000 ms 00:19:32.630 Doorbell Stride: 4 bytes 00:19:32.630 NVM Subsystem Reset: Not Supported 00:19:32.630 Command Sets Supported 00:19:32.630 NVM Command Set: Supported 00:19:32.630 Boot Partition: Not Supported 00:19:32.630 Memory Page Size Minimum: 4096 bytes 00:19:32.630 Memory Page Size Maximum: 4096 bytes 00:19:32.630 Persistent Memory Region: Not Supported 00:19:32.630 Optional Asynchronous Events Supported 00:19:32.630 Namespace Attribute Notices: Not Supported 00:19:32.630 Firmware Activation Notices: Not Supported 00:19:32.630 ANA Change Notices: Not Supported 00:19:32.630 PLE Aggregate Log Change Notices: Not Supported 00:19:32.630 LBA Status Info Alert Notices: Not Supported 00:19:32.630 EGE Aggregate Log Change Notices: Not Supported 00:19:32.630 Normal NVM Subsystem Shutdown event: Not Supported 00:19:32.630 Zone Descriptor Change Notices: Not Supported 00:19:32.630 Discovery Log Change Notices: Supported 00:19:32.630 Controller Attributes 00:19:32.630 128-bit Host Identifier: Not Supported 00:19:32.630 Non-Operational Permissive Mode: Not Supported 00:19:32.630 NVM Sets: Not Supported 00:19:32.630 Read Recovery Levels: Not Supported 00:19:32.630 Endurance Groups: Not Supported 00:19:32.630 Predictable Latency Mode: Not Supported 00:19:32.630 Traffic Based Keep ALive: Not Supported 00:19:32.630 Namespace Granularity: Not Supported 00:19:32.630 SQ Associations: Not Supported 00:19:32.630 UUID List: Not Supported 00:19:32.630 Multi-Domain Subsystem: Not Supported 00:19:32.630 Fixed Capacity Management: Not Supported 00:19:32.630 Variable Capacity Management: Not Supported 00:19:32.630 Delete Endurance Group: Not Supported 00:19:32.630 Delete NVM Set: Not Supported 00:19:32.630 Extended LBA Formats Supported: Not Supported 00:19:32.630 Flexible Data Placement Supported: Not Supported 00:19:32.630 00:19:32.630 Controller Memory Buffer Support 00:19:32.630 ================================ 00:19:32.630 Supported: No 00:19:32.630 00:19:32.630 Persistent Memory Region Support 00:19:32.630 ================================ 00:19:32.630 Supported: No 00:19:32.630 00:19:32.630 Admin Command Set Attributes 00:19:32.630 ============================ 00:19:32.630 Security Send/Receive: Not Supported 00:19:32.630 Format NVM: Not Supported 00:19:32.630 Firmware Activate/Download: Not Supported 00:19:32.630 Namespace Management: Not Supported 00:19:32.630 Device Self-Test: Not Supported 00:19:32.630 Directives: Not Supported 00:19:32.630 NVMe-MI: Not Supported 00:19:32.630 Virtualization Management: Not Supported 00:19:32.630 Doorbell Buffer Config: Not Supported 00:19:32.630 Get LBA Status Capability: Not Supported 00:19:32.630 Command & Feature Lockdown Capability: Not Supported 00:19:32.630 Abort Command Limit: 1 00:19:32.630 
Async Event Request Limit: 4 00:19:32.630 Number of Firmware Slots: N/A 00:19:32.630 Firmware Slot 1 Read-Only: N/A 00:19:32.630 [2024-04-25 18:14:30.384334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.630 [2024-04-25 18:14:30.384358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.630 [2024-04-25 18:14:30.384364] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.630 [2024-04-25 18:14:30.384368] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7fc50) on tqpair=0x1d40270 00:19:32.630 Firmware Activation Without Reset: N/A 00:19:32.630 Multiple Update Detection Support: N/A 00:19:32.630 Firmware Update Granularity: No Information Provided 00:19:32.630 Per-Namespace SMART Log: No 00:19:32.630 Asymmetric Namespace Access Log Page: Not Supported 00:19:32.630 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:32.630 Command Effects Log Page: Not Supported 00:19:32.630 Get Log Page Extended Data: Supported 00:19:32.630 Telemetry Log Pages: Not Supported 00:19:32.630 Persistent Event Log Pages: Not Supported 00:19:32.630 Supported Log Pages Log Page: May Support 00:19:32.630 Commands Supported & Effects Log Page: Not Supported 00:19:32.630 Feature Identifiers & Effects Log Page:May Support 00:19:32.630 NVMe-MI Commands & Effects Log Page: May Support 00:19:32.630 Data Area 4 for Telemetry Log: Not Supported 00:19:32.630 Error Log Page Entries Supported: 128 00:19:32.630 Keep Alive: Not Supported 00:19:32.630 00:19:32.630 NVM Command Set Attributes 00:19:32.630 ========================== 00:19:32.630 Submission Queue Entry Size 00:19:32.630 Max: 1 00:19:32.630 Min: 1 00:19:32.630 Completion Queue Entry Size 00:19:32.630 Max: 1 00:19:32.630 Min: 1 00:19:32.630 Number of Namespaces: 0 00:19:32.630 Compare Command: Not Supported 00:19:32.630 Write Uncorrectable Command: Not Supported 00:19:32.630 Dataset Management Command: Not Supported 00:19:32.630 Write Zeroes Command: Not Supported 00:19:32.630 Set Features Save Field: Not Supported 00:19:32.630 Reservations: Not Supported 00:19:32.630 Timestamp: Not Supported 00:19:32.630 Copy: Not Supported 00:19:32.630 Volatile Write Cache: Not Present 00:19:32.630 Atomic Write Unit (Normal): 1 00:19:32.630 Atomic Write Unit (PFail): 1 00:19:32.630 Atomic Compare & Write Unit: 1 00:19:32.630 Fused Compare & Write: Supported 00:19:32.630 Scatter-Gather List 00:19:32.630 SGL Command Set: Supported 00:19:32.630 SGL Keyed: Supported 00:19:32.630 SGL Bit Bucket Descriptor: Not Supported 00:19:32.630 SGL Metadata Pointer: Not Supported 00:19:32.630 Oversized SGL: Not Supported 00:19:32.630 SGL Metadata Address: Not Supported 00:19:32.630 SGL Offset: Supported 00:19:32.630 Transport SGL Data Block: Not Supported 00:19:32.630 Replay Protected Memory Block: Not Supported 00:19:32.630 00:19:32.630 Firmware Slot Information 00:19:32.630 ========================= 00:19:32.630 Active slot: 0 00:19:32.630 00:19:32.630 00:19:32.630 Error Log 00:19:32.630 ========= 00:19:32.630 00:19:32.630 Active Namespaces 00:19:32.630 ================= 00:19:32.630 Discovery Log Page 00:19:32.630 ================== 00:19:32.630 Generation Counter: 2 00:19:32.630 Number of Records: 2 00:19:32.630 Record Format: 0 00:19:32.630 00:19:32.630 Discovery Log Entry 0 00:19:32.630 ---------------------- 00:19:32.630 Transport Type: 3 (TCP) 00:19:32.630 Address Family: 1 (IPv4) 00:19:32.630 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:32.630 Entry Flags: 00:19:32.630 Duplicate
Returned Information: 1 00:19:32.630 Explicit Persistent Connection Support for Discovery: 1 00:19:32.630 Transport Requirements: 00:19:32.630 Secure Channel: Not Required 00:19:32.630 Port ID: 0 (0x0000) 00:19:32.630 Controller ID: 65535 (0xffff) 00:19:32.630 Admin Max SQ Size: 128 00:19:32.630 Transport Service Identifier: 4420 00:19:32.630 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:32.630 Transport Address: 10.0.0.2 00:19:32.630 Discovery Log Entry 1 00:19:32.630 ---------------------- 00:19:32.630 Transport Type: 3 (TCP) 00:19:32.630 Address Family: 1 (IPv4) 00:19:32.631 Subsystem Type: 2 (NVM Subsystem) 00:19:32.631 Entry Flags: 00:19:32.631 Duplicate Returned Information: 0 00:19:32.631 Explicit Persistent Connection Support for Discovery: 0 00:19:32.631 Transport Requirements: 00:19:32.631 Secure Channel: Not Required 00:19:32.631 Port ID: 0 (0x0000) 00:19:32.631 Controller ID: 65535 (0xffff) 00:19:32.631 Admin Max SQ Size: 128 00:19:32.631 Transport Service Identifier: 4420 00:19:32.631 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:32.631 Transport Address: 10.0.0.2 [2024-04-25 18:14:30.384499] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:32.631 [2024-04-25 18:14:30.384519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.631 [2024-04-25 18:14:30.384527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.631 [2024-04-25 18:14:30.384533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.631 [2024-04-25 18:14:30.384539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.631 [2024-04-25 18:14:30.384555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384560] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.384572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.384601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.384696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.384704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.384708] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384712] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.384721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.384736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.384765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.384834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.384842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.384845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.384855] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:32.631 [2024-04-25 18:14:30.384860] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:32.631 [2024-04-25 18:14:30.384870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.384885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.384908] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.384964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.384970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.384976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.384992] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.384997] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.385007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.385030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.385081] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.385088] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.385092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.385107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385115] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.385122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.385145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.385223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.385233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.385237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385241] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.385253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.385269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.385307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.385360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.385368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.385372] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385376] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.385387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.385405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.385429] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.385485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.385492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.385496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385500] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.385528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385533] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.385544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:32.631 [2024-04-25 18:14:30.385567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.385635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.385642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.385645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.385660] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385666] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385669] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.385676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.385698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.385751] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.385758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.385762] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.631 [2024-04-25 18:14:30.385777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385782] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.631 [2024-04-25 18:14:30.385792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.631 [2024-04-25 18:14:30.385816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.631 [2024-04-25 18:14:30.385865] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.631 [2024-04-25 18:14:30.385872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.631 [2024-04-25 18:14:30.385876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.631 [2024-04-25 18:14:30.385880] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.632 [2024-04-25 18:14:30.385891] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.385895] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.385899] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.632 [2024-04-25 18:14:30.385906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.632 [2024-04-25 18:14:30.385928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.632 [2024-04-25 18:14:30.385983] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.632 [2024-04-25 18:14:30.385990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.632 [2024-04-25 18:14:30.385994] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.385997] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.632 [2024-04-25 18:14:30.386008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.632 [2024-04-25 18:14:30.386024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.632 [2024-04-25 18:14:30.386046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.632 [2024-04-25 18:14:30.386099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.632 [2024-04-25 18:14:30.386107] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.632 [2024-04-25 18:14:30.386111] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386115] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.632 [2024-04-25 18:14:30.386126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386135] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.632 [2024-04-25 18:14:30.386142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.632 [2024-04-25 18:14:30.386163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.632 [2024-04-25 18:14:30.386217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.632 [2024-04-25 18:14:30.386225] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.632 [2024-04-25 18:14:30.386228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386232] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.632 [2024-04-25 18:14:30.386244] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386249] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.386252] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.632 [2024-04-25 18:14:30.386259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.632 [2024-04-25 18:14:30.386282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.632 [2024-04-25 18:14:30.390329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.632 [2024-04-25 18:14:30.390341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.632 
[2024-04-25 18:14:30.390344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.390349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.632 [2024-04-25 18:14:30.390363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.390369] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.390373] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d40270) 00:19:32.632 [2024-04-25 18:14:30.390381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.632 [2024-04-25 18:14:30.390410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d7faf0, cid 3, qid 0 00:19:32.632 [2024-04-25 18:14:30.390468] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.632 [2024-04-25 18:14:30.390476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.632 [2024-04-25 18:14:30.390480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.632 [2024-04-25 18:14:30.390484] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d7faf0) on tqpair=0x1d40270 00:19:32.632 [2024-04-25 18:14:30.390493] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:19:32.632 00:19:32.632 18:14:30 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:32.632 [2024-04-25 18:14:30.422776] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:32.632 [2024-04-25 18:14:30.422827] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81107 ] 00:19:32.632 [2024-04-25 18:14:30.553747] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:32.632 [2024-04-25 18:14:30.553819] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:32.632 [2024-04-25 18:14:30.553827] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:32.632 [2024-04-25 18:14:30.553841] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:32.632 [2024-04-25 18:14:30.553851] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:32.632 [2024-04-25 18:14:30.553965] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:32.632 [2024-04-25 18:14:30.554013] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e8a270 0 00:19:32.895 [2024-04-25 18:14:30.568319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:32.895 [2024-04-25 18:14:30.568343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:32.895 [2024-04-25 18:14:30.568349] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:32.895 [2024-04-25 18:14:30.568353] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:32.895 [2024-04-25 18:14:30.568403] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.568412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.568417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.895 [2024-04-25 18:14:30.568428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:32.895 [2024-04-25 18:14:30.568464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.895 [2024-04-25 18:14:30.576324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.895 [2024-04-25 18:14:30.576346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.895 [2024-04-25 18:14:30.576352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.576357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.895 [2024-04-25 18:14:30.576368] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:32.895 [2024-04-25 18:14:30.576377] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:32.895 [2024-04-25 18:14:30.576384] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:32.895 [2024-04-25 18:14:30.576405] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.576411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.576415] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.895 [2024-04-25 18:14:30.576424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.895 [2024-04-25 18:14:30.576458] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.895 [2024-04-25 18:14:30.576528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.895 [2024-04-25 18:14:30.576536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.895 [2024-04-25 18:14:30.576540] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.576544] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.895 [2024-04-25 18:14:30.576556] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:32.895 [2024-04-25 18:14:30.576582] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:32.895 [2024-04-25 18:14:30.576591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.576596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.576599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.895 [2024-04-25 18:14:30.576607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.895 [2024-04-25 18:14:30.576643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.895 [2024-04-25 18:14:30.577002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.895 [2024-04-25 18:14:30.577018] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.895 [2024-04-25 18:14:30.577024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.895 [2024-04-25 18:14:30.577035] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:32.895 [2024-04-25 18:14:30.577044] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:32.895 [2024-04-25 18:14:30.577053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577058] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.895 [2024-04-25 18:14:30.577069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.895 [2024-04-25 18:14:30.577094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.895 [2024-04-25 18:14:30.577147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.895 [2024-04-25 18:14:30.577154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.895 [2024-04-25 
18:14:30.577158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577162] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.895 [2024-04-25 18:14:30.577168] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:32.895 [2024-04-25 18:14:30.577212] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.895 [2024-04-25 18:14:30.577232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.895 [2024-04-25 18:14:30.577258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.895 [2024-04-25 18:14:30.577729] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.895 [2024-04-25 18:14:30.577747] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.895 [2024-04-25 18:14:30.577752] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.895 [2024-04-25 18:14:30.577761] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:32.895 [2024-04-25 18:14:30.577767] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:32.895 [2024-04-25 18:14:30.577776] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:32.895 [2024-04-25 18:14:30.577882] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:32.895 [2024-04-25 18:14:30.577887] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:32.895 [2024-04-25 18:14:30.577896] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.577904] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.895 [2024-04-25 18:14:30.577911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.895 [2024-04-25 18:14:30.577938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.895 [2024-04-25 18:14:30.578323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.895 [2024-04-25 18:14:30.578338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.895 [2024-04-25 18:14:30.578343] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.578347] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.895 
[2024-04-25 18:14:30.578353] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:32.895 [2024-04-25 18:14:30.578364] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.578370] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.895 [2024-04-25 18:14:30.578373] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.895 [2024-04-25 18:14:30.578381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.896 [2024-04-25 18:14:30.578406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.896 [2024-04-25 18:14:30.578480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.896 [2024-04-25 18:14:30.578488] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.896 [2024-04-25 18:14:30.578491] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.578495] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.896 [2024-04-25 18:14:30.578501] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:32.896 [2024-04-25 18:14:30.578506] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.578514] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:32.896 [2024-04-25 18:14:30.578526] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.578537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.578542] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.578545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.578553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.896 [2024-04-25 18:14:30.578578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.896 [2024-04-25 18:14:30.578980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.896 [2024-04-25 18:14:30.578996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.896 [2024-04-25 18:14:30.579001] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579005] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=4096, cccid=0 00:19:32.896 [2024-04-25 18:14:30.579009] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec96d0) on tqpair(0x1e8a270): expected_datao=0, payload_size=4096 00:19:32.896 [2024-04-25 18:14:30.579017] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579022] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.896 [2024-04-25 18:14:30.579037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.896 [2024-04-25 18:14:30.579041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579045] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.896 [2024-04-25 18:14:30.579054] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:32.896 [2024-04-25 18:14:30.579065] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:32.896 [2024-04-25 18:14:30.579071] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:32.896 [2024-04-25 18:14:30.579076] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:32.896 [2024-04-25 18:14:30.579080] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:32.896 [2024-04-25 18:14:30.579086] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.579095] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.579103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.579119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.896 [2024-04-25 18:14:30.579145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.896 [2024-04-25 18:14:30.579619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.896 [2024-04-25 18:14:30.579635] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.896 [2024-04-25 18:14:30.579640] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579644] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec96d0) on tqpair=0x1e8a270 00:19:32.896 [2024-04-25 18:14:30.579653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579657] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.579667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.896 [2024-04-25 18:14:30.579675] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579679] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.579688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.896 [2024-04-25 18:14:30.579694] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579698] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579701] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.579707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.896 [2024-04-25 18:14:30.579713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579717] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.579726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.896 [2024-04-25 18:14:30.579731] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.579750] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.579759] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579763] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.579767] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.579773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.896 [2024-04-25 18:14:30.579813] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec96d0, cid 0, qid 0 00:19:32.896 [2024-04-25 18:14:30.579821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9830, cid 1, qid 0 00:19:32.896 [2024-04-25 18:14:30.579826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9990, cid 2, qid 0 00:19:32.896 [2024-04-25 18:14:30.579830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.896 [2024-04-25 18:14:30.579835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9c50, cid 4, qid 0 00:19:32.896 [2024-04-25 18:14:30.584321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.896 [2024-04-25 18:14:30.584342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.896 [2024-04-25 18:14:30.584347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.584351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9c50) on tqpair=0x1e8a270 00:19:32.896 [2024-04-25 18:14:30.584358] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:32.896 [2024-04-25 18:14:30.584365] 
nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.584375] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.584383] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.584391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.584395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.584399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.584407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.896 [2024-04-25 18:14:30.584437] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9c50, cid 4, qid 0 00:19:32.896 [2024-04-25 18:14:30.584508] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.896 [2024-04-25 18:14:30.584531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.896 [2024-04-25 18:14:30.584536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.584540] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9c50) on tqpair=0x1e8a270 00:19:32.896 [2024-04-25 18:14:30.584587] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.584599] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:32.896 [2024-04-25 18:14:30.584608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.584612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.584616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8a270) 00:19:32.896 [2024-04-25 18:14:30.584623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.896 [2024-04-25 18:14:30.584649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9c50, cid 4, qid 0 00:19:32.896 [2024-04-25 18:14:30.584959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.896 [2024-04-25 18:14:30.584974] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.896 [2024-04-25 18:14:30.584979] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.584983] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=4096, cccid=4 00:19:32.896 [2024-04-25 18:14:30.584988] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec9c50) on tqpair(0x1e8a270): expected_datao=0, payload_size=4096 00:19:32.896 [2024-04-25 18:14:30.584996] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.896 [2024-04-25 18:14:30.585000] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:19:32.897 [2024-04-25 18:14:30.585010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.585016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.585020] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585023] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9c50) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.585044] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:32.897 [2024-04-25 18:14:30.585059] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.585072] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.585080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585084] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.585098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.897 [2024-04-25 18:14:30.585126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9c50, cid 4, qid 0 00:19:32.897 [2024-04-25 18:14:30.585655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.897 [2024-04-25 18:14:30.585689] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.897 [2024-04-25 18:14:30.585694] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585698] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=4096, cccid=4 00:19:32.897 [2024-04-25 18:14:30.585703] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec9c50) on tqpair(0x1e8a270): expected_datao=0, payload_size=4096 00:19:32.897 [2024-04-25 18:14:30.585711] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585715] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.585739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.585742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585746] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9c50) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.585766] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.585779] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.585789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585794] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.585797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.585805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.897 [2024-04-25 18:14:30.585832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9c50, cid 4, qid 0 00:19:32.897 [2024-04-25 18:14:30.586019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.897 [2024-04-25 18:14:30.586027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.897 [2024-04-25 18:14:30.586031] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586034] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=4096, cccid=4 00:19:32.897 [2024-04-25 18:14:30.586039] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec9c50) on tqpair(0x1e8a270): expected_datao=0, payload_size=4096 00:19:32.897 [2024-04-25 18:14:30.586046] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586050] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.586293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.586300] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586304] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9c50) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.586314] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.586325] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.586340] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.586347] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.586353] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.586358] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:32.897 [2024-04-25 18:14:30.586363] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:32.897 [2024-04-25 18:14:30.586368] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:32.897 [2024-04-25 18:14:30.586384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586393] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.586399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.897 [2024-04-25 18:14:30.586406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586410] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586413] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.586419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.897 [2024-04-25 18:14:30.586452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9c50, cid 4, qid 0 00:19:32.897 [2024-04-25 18:14:30.586462] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9db0, cid 5, qid 0 00:19:32.897 [2024-04-25 18:14:30.586847] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.586862] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.586867] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586871] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9c50) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.586879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.586885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.586888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9db0) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.586904] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586909] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.586912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.586919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.897 [2024-04-25 18:14:30.586945] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9db0, cid 5, qid 0 00:19:32.897 [2024-04-25 18:14:30.587007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.587014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.587018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9db0) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.587033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.587048] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.897 [2024-04-25 18:14:30.587071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9db0, cid 5, qid 0 00:19:32.897 [2024-04-25 18:14:30.587450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.587465] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.587470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587474] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9db0) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.587486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587491] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587494] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.587501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.897 [2024-04-25 18:14:30.587527] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9db0, cid 5, qid 0 00:19:32.897 [2024-04-25 18:14:30.587601] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.897 [2024-04-25 18:14:30.587608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.897 [2024-04-25 18:14:30.587612] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9db0) on tqpair=0x1e8a270 00:19:32.897 [2024-04-25 18:14:30.587631] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587636] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8a270) 00:19:32.897 [2024-04-25 18:14:30.587647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.897 [2024-04-25 18:14:30.587654] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.897 [2024-04-25 18:14:30.587662] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8a270) 00:19:32.898 [2024-04-25 18:14:30.587668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.898 [2024-04-25 18:14:30.587676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.587680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.587684] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e8a270) 00:19:32.898 [2024-04-25 18:14:30.587690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:32.898 [2024-04-25 18:14:30.587697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.587701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.587704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e8a270) 00:19:32.898 [2024-04-25 18:14:30.587710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.898 [2024-04-25 18:14:30.587736] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9db0, cid 5, qid 0 00:19:32.898 [2024-04-25 18:14:30.587744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9c50, cid 4, qid 0 00:19:32.898 [2024-04-25 18:14:30.587748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9f10, cid 6, qid 0 00:19:32.898 [2024-04-25 18:14:30.587753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eca070, cid 7, qid 0 00:19:32.898 [2024-04-25 18:14:30.588217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.898 [2024-04-25 18:14:30.588232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.898 [2024-04-25 18:14:30.588237] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.588240] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=8192, cccid=5 00:19:32.898 [2024-04-25 18:14:30.588245] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec9db0) on tqpair(0x1e8a270): expected_datao=0, payload_size=8192 00:19:32.898 [2024-04-25 18:14:30.588264] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592313] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.898 [2024-04-25 18:14:30.592342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.898 [2024-04-25 18:14:30.592353] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592356] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=512, cccid=4 00:19:32.898 [2024-04-25 18:14:30.592361] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec9c50) on tqpair(0x1e8a270): expected_datao=0, payload_size=512 00:19:32.898 [2024-04-25 18:14:30.592369] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592373] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.898 [2024-04-25 18:14:30.592384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.898 [2024-04-25 18:14:30.592388] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592392] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=512, cccid=6 00:19:32.898 [2024-04-25 18:14:30.592397] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec9f10) on tqpair(0x1e8a270): expected_datao=0, payload_size=512 00:19:32.898 [2024-04-25 18:14:30.592403] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592407] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:32.898 [2024-04-25 18:14:30.592419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:32.898 [2024-04-25 18:14:30.592423] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592427] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8a270): datao=0, datal=4096, cccid=7 00:19:32.898 [2024-04-25 18:14:30.592431] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eca070) on tqpair(0x1e8a270): expected_datao=0, payload_size=4096 00:19:32.898 [2024-04-25 18:14:30.592438] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592442] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.898 [2024-04-25 18:14:30.592453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.898 [2024-04-25 18:14:30.592457] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9db0) on tqpair=0x1e8a270 00:19:32.898 [2024-04-25 18:14:30.592482] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.898 [2024-04-25 18:14:30.592491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.898 [2024-04-25 18:14:30.592494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9c50) on tqpair=0x1e8a270 00:19:32.898 [2024-04-25 18:14:30.592510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.898 [2024-04-25 18:14:30.592517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.898 [2024-04-25 18:14:30.592521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9f10) on tqpair=0x1e8a270 00:19:32.898 [2024-04-25 18:14:30.592548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.898 [2024-04-25 18:14:30.592555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.898 [2024-04-25 18:14:30.592574] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.898 [2024-04-25 18:14:30.592579] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eca070) on tqpair=0x1e8a270 00:19:32.898 ===================================================== 00:19:32.898 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.898 ===================================================== 00:19:32.898 Controller Capabilities/Features 00:19:32.898 ================================ 00:19:32.898 Vendor ID: 8086 00:19:32.898 Subsystem Vendor ID: 8086 00:19:32.898 Serial Number: SPDK00000000000001 00:19:32.898 Model Number: SPDK bdev Controller 00:19:32.898 Firmware Version: 24.01.1 00:19:32.898 Recommended Arb Burst: 6 00:19:32.898 IEEE OUI Identifier: e4 d2 5c 00:19:32.898 Multi-path I/O 00:19:32.898 May have multiple subsystem 
ports: Yes 00:19:32.898 May have multiple controllers: Yes 00:19:32.898 Associated with SR-IOV VF: No 00:19:32.898 Max Data Transfer Size: 131072 00:19:32.898 Max Number of Namespaces: 32 00:19:32.898 Max Number of I/O Queues: 127 00:19:32.898 NVMe Specification Version (VS): 1.3 00:19:32.898 NVMe Specification Version (Identify): 1.3 00:19:32.898 Maximum Queue Entries: 128 00:19:32.898 Contiguous Queues Required: Yes 00:19:32.898 Arbitration Mechanisms Supported 00:19:32.898 Weighted Round Robin: Not Supported 00:19:32.898 Vendor Specific: Not Supported 00:19:32.898 Reset Timeout: 15000 ms 00:19:32.898 Doorbell Stride: 4 bytes 00:19:32.898 NVM Subsystem Reset: Not Supported 00:19:32.898 Command Sets Supported 00:19:32.898 NVM Command Set: Supported 00:19:32.898 Boot Partition: Not Supported 00:19:32.898 Memory Page Size Minimum: 4096 bytes 00:19:32.898 Memory Page Size Maximum: 4096 bytes 00:19:32.898 Persistent Memory Region: Not Supported 00:19:32.898 Optional Asynchronous Events Supported 00:19:32.898 Namespace Attribute Notices: Supported 00:19:32.898 Firmware Activation Notices: Not Supported 00:19:32.898 ANA Change Notices: Not Supported 00:19:32.898 PLE Aggregate Log Change Notices: Not Supported 00:19:32.898 LBA Status Info Alert Notices: Not Supported 00:19:32.898 EGE Aggregate Log Change Notices: Not Supported 00:19:32.898 Normal NVM Subsystem Shutdown event: Not Supported 00:19:32.898 Zone Descriptor Change Notices: Not Supported 00:19:32.898 Discovery Log Change Notices: Not Supported 00:19:32.898 Controller Attributes 00:19:32.898 128-bit Host Identifier: Supported 00:19:32.898 Non-Operational Permissive Mode: Not Supported 00:19:32.898 NVM Sets: Not Supported 00:19:32.898 Read Recovery Levels: Not Supported 00:19:32.898 Endurance Groups: Not Supported 00:19:32.898 Predictable Latency Mode: Not Supported 00:19:32.898 Traffic Based Keep ALive: Not Supported 00:19:32.898 Namespace Granularity: Not Supported 00:19:32.898 SQ Associations: Not Supported 00:19:32.898 UUID List: Not Supported 00:19:32.898 Multi-Domain Subsystem: Not Supported 00:19:32.898 Fixed Capacity Management: Not Supported 00:19:32.898 Variable Capacity Management: Not Supported 00:19:32.898 Delete Endurance Group: Not Supported 00:19:32.898 Delete NVM Set: Not Supported 00:19:32.898 Extended LBA Formats Supported: Not Supported 00:19:32.898 Flexible Data Placement Supported: Not Supported 00:19:32.898 00:19:32.898 Controller Memory Buffer Support 00:19:32.898 ================================ 00:19:32.898 Supported: No 00:19:32.898 00:19:32.898 Persistent Memory Region Support 00:19:32.898 ================================ 00:19:32.898 Supported: No 00:19:32.898 00:19:32.898 Admin Command Set Attributes 00:19:32.898 ============================ 00:19:32.898 Security Send/Receive: Not Supported 00:19:32.898 Format NVM: Not Supported 00:19:32.898 Firmware Activate/Download: Not Supported 00:19:32.898 Namespace Management: Not Supported 00:19:32.898 Device Self-Test: Not Supported 00:19:32.898 Directives: Not Supported 00:19:32.898 NVMe-MI: Not Supported 00:19:32.898 Virtualization Management: Not Supported 00:19:32.898 Doorbell Buffer Config: Not Supported 00:19:32.898 Get LBA Status Capability: Not Supported 00:19:32.898 Command & Feature Lockdown Capability: Not Supported 00:19:32.898 Abort Command Limit: 4 00:19:32.898 Async Event Request Limit: 4 00:19:32.898 Number of Firmware Slots: N/A 00:19:32.898 Firmware Slot 1 Read-Only: N/A 00:19:32.898 Firmware Activation Without Reset: N/A 00:19:32.899 Multiple 
Update Detection Support: N/A 00:19:32.899 Firmware Update Granularity: No Information Provided 00:19:32.899 Per-Namespace SMART Log: No 00:19:32.899 Asymmetric Namespace Access Log Page: Not Supported 00:19:32.899 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:32.899 Command Effects Log Page: Supported 00:19:32.899 Get Log Page Extended Data: Supported 00:19:32.899 Telemetry Log Pages: Not Supported 00:19:32.899 Persistent Event Log Pages: Not Supported 00:19:32.899 Supported Log Pages Log Page: May Support 00:19:32.899 Commands Supported & Effects Log Page: Not Supported 00:19:32.899 Feature Identifiers & Effects Log Page:May Support 00:19:32.899 NVMe-MI Commands & Effects Log Page: May Support 00:19:32.899 Data Area 4 for Telemetry Log: Not Supported 00:19:32.899 Error Log Page Entries Supported: 128 00:19:32.899 Keep Alive: Supported 00:19:32.899 Keep Alive Granularity: 10000 ms 00:19:32.899 00:19:32.899 NVM Command Set Attributes 00:19:32.899 ========================== 00:19:32.899 Submission Queue Entry Size 00:19:32.899 Max: 64 00:19:32.899 Min: 64 00:19:32.899 Completion Queue Entry Size 00:19:32.899 Max: 16 00:19:32.899 Min: 16 00:19:32.899 Number of Namespaces: 32 00:19:32.899 Compare Command: Supported 00:19:32.899 Write Uncorrectable Command: Not Supported 00:19:32.899 Dataset Management Command: Supported 00:19:32.899 Write Zeroes Command: Supported 00:19:32.899 Set Features Save Field: Not Supported 00:19:32.899 Reservations: Supported 00:19:32.899 Timestamp: Not Supported 00:19:32.899 Copy: Supported 00:19:32.899 Volatile Write Cache: Present 00:19:32.899 Atomic Write Unit (Normal): 1 00:19:32.899 Atomic Write Unit (PFail): 1 00:19:32.899 Atomic Compare & Write Unit: 1 00:19:32.899 Fused Compare & Write: Supported 00:19:32.899 Scatter-Gather List 00:19:32.899 SGL Command Set: Supported 00:19:32.899 SGL Keyed: Supported 00:19:32.899 SGL Bit Bucket Descriptor: Not Supported 00:19:32.899 SGL Metadata Pointer: Not Supported 00:19:32.899 Oversized SGL: Not Supported 00:19:32.899 SGL Metadata Address: Not Supported 00:19:32.899 SGL Offset: Supported 00:19:32.899 Transport SGL Data Block: Not Supported 00:19:32.899 Replay Protected Memory Block: Not Supported 00:19:32.899 00:19:32.899 Firmware Slot Information 00:19:32.899 ========================= 00:19:32.899 Active slot: 1 00:19:32.899 Slot 1 Firmware Revision: 24.01.1 00:19:32.899 00:19:32.899 00:19:32.899 Commands Supported and Effects 00:19:32.899 ============================== 00:19:32.899 Admin Commands 00:19:32.899 -------------- 00:19:32.899 Get Log Page (02h): Supported 00:19:32.899 Identify (06h): Supported 00:19:32.899 Abort (08h): Supported 00:19:32.899 Set Features (09h): Supported 00:19:32.899 Get Features (0Ah): Supported 00:19:32.899 Asynchronous Event Request (0Ch): Supported 00:19:32.899 Keep Alive (18h): Supported 00:19:32.899 I/O Commands 00:19:32.899 ------------ 00:19:32.899 Flush (00h): Supported LBA-Change 00:19:32.899 Write (01h): Supported LBA-Change 00:19:32.899 Read (02h): Supported 00:19:32.899 Compare (05h): Supported 00:19:32.899 Write Zeroes (08h): Supported LBA-Change 00:19:32.899 Dataset Management (09h): Supported LBA-Change 00:19:32.899 Copy (19h): Supported LBA-Change 00:19:32.899 Unknown (79h): Supported LBA-Change 00:19:32.899 Unknown (7Ah): Supported 00:19:32.899 00:19:32.899 Error Log 00:19:32.899 ========= 00:19:32.899 00:19:32.899 Arbitration 00:19:32.899 =========== 00:19:32.899 Arbitration Burst: 1 00:19:32.899 00:19:32.899 Power Management 00:19:32.899 ================ 00:19:32.899 
Number of Power States: 1 00:19:32.899 Current Power State: Power State #0 00:19:32.899 Power State #0: 00:19:32.899 Max Power: 0.00 W 00:19:32.899 Non-Operational State: Operational 00:19:32.899 Entry Latency: Not Reported 00:19:32.899 Exit Latency: Not Reported 00:19:32.899 Relative Read Throughput: 0 00:19:32.899 Relative Read Latency: 0 00:19:32.899 Relative Write Throughput: 0 00:19:32.899 Relative Write Latency: 0 00:19:32.899 Idle Power: Not Reported 00:19:32.899 Active Power: Not Reported 00:19:32.899 Non-Operational Permissive Mode: Not Supported 00:19:32.899 00:19:32.899 Health Information 00:19:32.899 ================== 00:19:32.899 Critical Warnings: 00:19:32.899 Available Spare Space: OK 00:19:32.899 Temperature: OK 00:19:32.899 Device Reliability: OK 00:19:32.899 Read Only: No 00:19:32.899 Volatile Memory Backup: OK 00:19:32.899 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:32.899 Temperature Threshold: [2024-04-25 18:14:30.592689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.592697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.592700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e8a270) 00:19:32.899 [2024-04-25 18:14:30.592708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.899 [2024-04-25 18:14:30.592741] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eca070, cid 7, qid 0 00:19:32.899 [2024-04-25 18:14:30.592897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.899 [2024-04-25 18:14:30.592905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.899 [2024-04-25 18:14:30.592909] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.592913] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eca070) on tqpair=0x1e8a270 00:19:32.899 [2024-04-25 18:14:30.592952] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:32.899 [2024-04-25 18:14:30.592968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.899 [2024-04-25 18:14:30.592976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.899 [2024-04-25 18:14:30.592982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.899 [2024-04-25 18:14:30.592988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.899 [2024-04-25 18:14:30.592999] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.593003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.593007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.899 [2024-04-25 18:14:30.593014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.899 [2024-04-25 18:14:30.593041] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 
00:19:32.899 [2024-04-25 18:14:30.593359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.899 [2024-04-25 18:14:30.593377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.899 [2024-04-25 18:14:30.593382] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.593386] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.899 [2024-04-25 18:14:30.593396] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.593401] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.899 [2024-04-25 18:14:30.593404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.899 [2024-04-25 18:14:30.593412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.593444] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.593604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.593620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.593625] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.593630] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.593637] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:32.900 [2024-04-25 18:14:30.593642] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:32.900 [2024-04-25 18:14:30.593653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.593658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.593662] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.593669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.593718] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.593919] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.593934] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.593938] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.593942] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.593955] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.593960] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.593964] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.593971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 
18:14:30.593995] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.594072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.594079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.594083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.594098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.594113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.594132] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.594515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.594530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.594535] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.594551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594560] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.594567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.594594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.594898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.594912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.594917] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.594933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594939] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.594942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.594949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.594973] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.595156] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.595170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.595175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595179] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.595191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595196] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.595207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.595232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.595534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.595550] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.595554] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595558] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.595571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.595587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.595612] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.595872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.595886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.595891] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595895] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.595907] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.595916] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.595923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.595947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.596226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.596241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.596245] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.596249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.596262] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.596267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.600303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8a270) 00:19:32.900 [2024-04-25 18:14:30.600319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.900 [2024-04-25 18:14:30.600352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec9af0, cid 3, qid 0 00:19:32.900 [2024-04-25 18:14:30.600427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:32.900 [2024-04-25 18:14:30.600435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:32.900 [2024-04-25 18:14:30.600439] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:32.900 [2024-04-25 18:14:30.600443] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ec9af0) on tqpair=0x1e8a270 00:19:32.900 [2024-04-25 18:14:30.600453] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:19:32.900 0 Kelvin (-273 Celsius) 00:19:32.900 Available Spare: 0% 00:19:32.900 Available Spare Threshold: 0% 00:19:32.900 Life Percentage Used: 0% 00:19:32.900 Data Units Read: 0 00:19:32.900 Data Units Written: 0 00:19:32.900 Host Read Commands: 0 00:19:32.900 Host Write Commands: 0 00:19:32.900 Controller Busy Time: 0 minutes 00:19:32.900 Power Cycles: 0 00:19:32.900 Power On Hours: 0 hours 00:19:32.900 Unsafe Shutdowns: 0 00:19:32.900 Unrecoverable Media Errors: 0 00:19:32.900 Lifetime Error Log Entries: 0 00:19:32.900 Warning Temperature Time: 0 minutes 00:19:32.900 Critical Temperature Time: 0 minutes 00:19:32.900 00:19:32.900 Number of Queues 00:19:32.900 ================ 00:19:32.900 Number of I/O Submission Queues: 127 00:19:32.900 Number of I/O Completion Queues: 127 00:19:32.900 00:19:32.900 Active Namespaces 00:19:32.900 ================= 00:19:32.900 Namespace ID:1 00:19:32.900 Error Recovery Timeout: Unlimited 00:19:32.900 Command Set Identifier: NVM (00h) 00:19:32.900 Deallocate: Supported 00:19:32.900 Deallocated/Unwritten Error: Not Supported 00:19:32.900 Deallocated Read Value: Unknown 00:19:32.900 Deallocate in Write Zeroes: Not Supported 00:19:32.900 Deallocated Guard Field: 0xFFFF 00:19:32.900 Flush: Supported 00:19:32.900 Reservation: Supported 00:19:32.900 Namespace Sharing Capabilities: Multiple Controllers 00:19:32.900 Size (in LBAs): 131072 (0GiB) 00:19:32.901 Capacity (in LBAs): 131072 (0GiB) 00:19:32.901 Utilization (in LBAs): 131072 (0GiB) 00:19:32.901 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:32.901 EUI64: ABCDEF0123456789 00:19:32.901 UUID: 124930f9-91ea-4d45-b079-459a2b9ef550 00:19:32.901 Thin Provisioning: Not Supported 00:19:32.901 Per-NS Atomic Units: Yes 00:19:32.901 Atomic Boundary Size (Normal): 0 00:19:32.901 Atomic Boundary Size (PFail): 0 00:19:32.901 Atomic Boundary Offset: 0 00:19:32.901 Maximum Single Source Range Length: 65535 00:19:32.901 Maximum Copy Length: 65535 00:19:32.901 Maximum Source Range Count: 1 00:19:32.901 NGUID/EUI64 Never Reused: No 00:19:32.901 Namespace 
Write Protected: No 00:19:32.901 Number of LBA Formats: 1 00:19:32.901 Current LBA Format: LBA Format #00 00:19:32.901 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:32.901 00:19:32.901 18:14:30 -- host/identify.sh@51 -- # sync 00:19:32.901 18:14:30 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.901 18:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.901 18:14:30 -- common/autotest_common.sh@10 -- # set +x 00:19:32.901 18:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.901 18:14:30 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:32.901 18:14:30 -- host/identify.sh@56 -- # nvmftestfini 00:19:32.901 18:14:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.901 18:14:30 -- nvmf/common.sh@116 -- # sync 00:19:32.901 18:14:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:32.901 18:14:30 -- nvmf/common.sh@119 -- # set +e 00:19:32.901 18:14:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.901 18:14:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:32.901 rmmod nvme_tcp 00:19:32.901 rmmod nvme_fabrics 00:19:32.901 rmmod nvme_keyring 00:19:32.901 18:14:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.901 18:14:30 -- nvmf/common.sh@123 -- # set -e 00:19:32.901 18:14:30 -- nvmf/common.sh@124 -- # return 0 00:19:32.901 18:14:30 -- nvmf/common.sh@477 -- # '[' -n 81046 ']' 00:19:32.901 18:14:30 -- nvmf/common.sh@478 -- # killprocess 81046 00:19:32.901 18:14:30 -- common/autotest_common.sh@926 -- # '[' -z 81046 ']' 00:19:32.901 18:14:30 -- common/autotest_common.sh@930 -- # kill -0 81046 00:19:32.901 18:14:30 -- common/autotest_common.sh@931 -- # uname 00:19:32.901 18:14:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.901 18:14:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81046 00:19:32.901 18:14:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:32.901 18:14:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:32.901 killing process with pid 81046 00:19:32.901 18:14:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81046' 00:19:32.901 18:14:30 -- common/autotest_common.sh@945 -- # kill 81046 00:19:32.901 [2024-04-25 18:14:30.742486] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:32.901 18:14:30 -- common/autotest_common.sh@950 -- # wait 81046 00:19:33.468 18:14:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:33.468 18:14:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:33.468 18:14:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:33.468 18:14:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.468 18:14:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:33.468 18:14:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.468 18:14:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.469 18:14:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.469 18:14:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:33.469 00:19:33.469 real 0m2.647s 00:19:33.469 user 0m7.213s 00:19:33.469 sys 0m0.700s 00:19:33.469 18:14:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.469 18:14:31 -- common/autotest_common.sh@10 -- # set +x 00:19:33.469 ************************************ 00:19:33.469 END 
TEST nvmf_identify 00:19:33.469 ************************************ 00:19:33.469 18:14:31 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:33.469 18:14:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:33.469 18:14:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:33.469 18:14:31 -- common/autotest_common.sh@10 -- # set +x 00:19:33.469 ************************************ 00:19:33.469 START TEST nvmf_perf 00:19:33.469 ************************************ 00:19:33.469 18:14:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:33.469 * Looking for test storage... 00:19:33.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:33.469 18:14:31 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:33.469 18:14:31 -- nvmf/common.sh@7 -- # uname -s 00:19:33.469 18:14:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.469 18:14:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.469 18:14:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.469 18:14:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.469 18:14:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.469 18:14:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.469 18:14:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.469 18:14:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.469 18:14:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.469 18:14:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.469 18:14:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:33.469 18:14:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:19:33.469 18:14:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.469 18:14:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.469 18:14:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:33.469 18:14:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:33.469 18:14:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.469 18:14:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.469 18:14:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.469 18:14:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.469 18:14:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.469 18:14:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.469 18:14:31 -- paths/export.sh@5 -- # export PATH 00:19:33.469 18:14:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.469 18:14:31 -- nvmf/common.sh@46 -- # : 0 00:19:33.469 18:14:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:33.469 18:14:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:33.469 18:14:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:33.469 18:14:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.469 18:14:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.469 18:14:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:33.469 18:14:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:33.469 18:14:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:33.469 18:14:31 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:33.469 18:14:31 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:33.469 18:14:31 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:33.469 18:14:31 -- host/perf.sh@17 -- # nvmftestinit 00:19:33.469 18:14:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:33.469 18:14:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.469 18:14:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:33.469 18:14:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:33.469 18:14:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:33.469 18:14:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.469 18:14:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.469 18:14:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.469 18:14:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:33.469 18:14:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:33.469 18:14:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:33.469 18:14:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 
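(For orientation: with NET_TYPE=virt, nvmftestinit calls nvmf_veth_init, and the trace that follows builds the virtual initiator/target topology one command at a time. Condensed into a sketch using only the interface names and addresses visible below — and omitting the repeated "ip link set ... up" steps — the setup amounts to:)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                               # bridge joining the two pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # reachability check of the target IP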
00:19:33.469 18:14:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:33.469 18:14:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:33.469 18:14:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.469 18:14:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.469 18:14:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:33.469 18:14:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:33.469 18:14:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:33.469 18:14:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:33.469 18:14:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:33.469 18:14:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.469 18:14:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:33.469 18:14:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:33.469 18:14:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:33.469 18:14:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:33.469 18:14:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:33.469 18:14:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:33.469 Cannot find device "nvmf_tgt_br" 00:19:33.469 18:14:31 -- nvmf/common.sh@154 -- # true 00:19:33.469 18:14:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.469 Cannot find device "nvmf_tgt_br2" 00:19:33.469 18:14:31 -- nvmf/common.sh@155 -- # true 00:19:33.469 18:14:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:33.469 18:14:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:33.469 Cannot find device "nvmf_tgt_br" 00:19:33.469 18:14:31 -- nvmf/common.sh@157 -- # true 00:19:33.469 18:14:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:33.469 Cannot find device "nvmf_tgt_br2" 00:19:33.469 18:14:31 -- nvmf/common.sh@158 -- # true 00:19:33.469 18:14:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:33.469 18:14:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:33.728 18:14:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.728 18:14:31 -- nvmf/common.sh@161 -- # true 00:19:33.728 18:14:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.728 18:14:31 -- nvmf/common.sh@162 -- # true 00:19:33.728 18:14:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.728 18:14:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.728 18:14:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.728 18:14:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.728 18:14:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.728 18:14:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.728 18:14:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.728 18:14:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:33.728 18:14:31 -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:33.728 18:14:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:33.728 18:14:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:33.728 18:14:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:33.728 18:14:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:33.728 18:14:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.728 18:14:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.728 18:14:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.728 18:14:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:33.728 18:14:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:33.728 18:14:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.728 18:14:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.728 18:14:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.728 18:14:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.728 18:14:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.728 18:14:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:33.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:19:33.728 00:19:33.728 --- 10.0.0.2 ping statistics --- 00:19:33.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.728 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:19:33.728 18:14:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:33.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:33.728 00:19:33.728 --- 10.0.0.3 ping statistics --- 00:19:33.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.728 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:33.728 18:14:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:33.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:33.728 00:19:33.728 --- 10.0.0.1 ping statistics --- 00:19:33.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.728 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:33.728 18:14:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.728 18:14:31 -- nvmf/common.sh@421 -- # return 0 00:19:33.728 18:14:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:33.728 18:14:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.728 18:14:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:33.728 18:14:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:33.728 18:14:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.728 18:14:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:33.728 18:14:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:33.728 18:14:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:33.728 18:14:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:33.728 18:14:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:33.728 18:14:31 -- common/autotest_common.sh@10 -- # set +x 00:19:33.728 18:14:31 -- nvmf/common.sh@469 -- # nvmfpid=81272 00:19:33.728 18:14:31 -- nvmf/common.sh@470 -- # waitforlisten 81272 00:19:33.728 18:14:31 -- common/autotest_common.sh@819 -- # '[' -z 81272 ']' 00:19:33.728 18:14:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.728 18:14:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.728 18:14:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:33.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.728 18:14:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.729 18:14:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:33.729 18:14:31 -- common/autotest_common.sh@10 -- # set +x 00:19:33.987 [2024-04-25 18:14:31.707730] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:33.987 [2024-04-25 18:14:31.707802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.987 [2024-04-25 18:14:31.841975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.245 [2024-04-25 18:14:31.961562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:34.245 [2024-04-25 18:14:31.961733] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.245 [2024-04-25 18:14:31.961747] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.245 [2024-04-25 18:14:31.961756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
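Aside for readers following the trace: the nvmf_veth_init sequence above is easier to see as a condensed, hand-runnable recap. Every command, interface name, and address below is taken from the trace itself (nvmf/common.sh@165-206 and the nvmf_tgt launch at nvmf/common.sh@468); this is an illustrative sketch, not the SPDK script, and the earlier "Cannot find device" / "Cannot open network namespace" messages appear to be the script's best-effort teardown of a topology that does not exist yet.

  # Initiator side stays in the default netns on 10.0.0.1; the target runs in
  # its own namespace with two interfaces (10.0.0.2 and 10.0.0.3), all joined
  # through the nvmf_br bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the trace
  # The target is then started inside the namespace, exactly as logged above:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
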
00:19:34.245 [2024-04-25 18:14:31.961903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.245 [2024-04-25 18:14:31.963147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.245 [2024-04-25 18:14:31.963320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.245 [2024-04-25 18:14:31.963334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.812 18:14:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:34.812 18:14:32 -- common/autotest_common.sh@852 -- # return 0 00:19:34.812 18:14:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:34.812 18:14:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:34.812 18:14:32 -- common/autotest_common.sh@10 -- # set +x 00:19:35.071 18:14:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.071 18:14:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:35.071 18:14:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:35.331 18:14:33 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:35.331 18:14:33 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:35.589 18:14:33 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:19:35.589 18:14:33 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:35.847 18:14:33 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:35.847 18:14:33 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:19:35.847 18:14:33 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:35.847 18:14:33 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:35.847 18:14:33 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.105 [2024-04-25 18:14:33.878641] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.105 18:14:33 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.363 18:14:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:36.363 18:14:34 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.621 18:14:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:36.621 18:14:34 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:36.879 18:14:34 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:37.138 [2024-04-25 18:14:34.839851] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.138 18:14:34 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:37.138 18:14:35 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:37.138 18:14:35 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:37.138 18:14:35 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:37.138 18:14:35 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:38.541 Initializing NVMe 
Controllers 00:19:38.541 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:19:38.541 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:19:38.541 Initialization complete. Launching workers. 00:19:38.541 ======================================================== 00:19:38.541 Latency(us) 00:19:38.541 Device Information : IOPS MiB/s Average min max 00:19:38.541 PCIE (0000:00:06.0) NSID 1 from core 0: 22528.00 88.00 1419.66 387.05 7967.08 00:19:38.541 ======================================================== 00:19:38.541 Total : 22528.00 88.00 1419.66 387.05 7967.08 00:19:38.542 00:19:38.542 18:14:36 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:39.917 Initializing NVMe Controllers 00:19:39.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:39.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:39.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:39.917 Initialization complete. Launching workers. 00:19:39.917 ======================================================== 00:19:39.917 Latency(us) 00:19:39.917 Device Information : IOPS MiB/s Average min max 00:19:39.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4005.08 15.64 248.36 99.35 5047.00 00:19:39.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.88 0.48 8137.84 6878.19 12037.40 00:19:39.917 ======================================================== 00:19:39.917 Total : 4127.96 16.12 483.22 99.35 12037.40 00:19:39.917 00:19:39.917 18:14:37 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.296 Initializing NVMe Controllers 00:19:41.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:41.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:41.296 Initialization complete. Launching workers. 00:19:41.296 ======================================================== 00:19:41.296 Latency(us) 00:19:41.296 Device Information : IOPS MiB/s Average min max 00:19:41.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10128.09 39.56 3159.80 542.32 8175.24 00:19:41.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2682.37 10.48 12059.84 5994.49 21879.45 00:19:41.296 ======================================================== 00:19:41.296 Total : 12810.47 50.04 5023.37 542.32 21879.45 00:19:41.296 00:19:41.296 18:14:38 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:41.296 18:14:38 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:43.829 Initializing NVMe Controllers 00:19:43.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.829 Controller IO queue size 128, less than required. 00:19:43.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.829 Controller IO queue size 128, less than required. 
00:19:43.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:43.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:43.829 Initialization complete. Launching workers. 00:19:43.829 ======================================================== 00:19:43.829 Latency(us) 00:19:43.829 Device Information : IOPS MiB/s Average min max 00:19:43.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1815.97 453.99 71808.07 46503.05 129953.05 00:19:43.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 490.99 122.75 271517.32 83880.62 589887.28 00:19:43.829 ======================================================== 00:19:43.829 Total : 2306.97 576.74 114312.30 46503.05 589887.28 00:19:43.829 00:19:43.829 18:14:41 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:19:43.829 No valid NVMe controllers or AIO or URING devices found 00:19:43.829 Initializing NVMe Controllers 00:19:43.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.829 Controller IO queue size 128, less than required. 00:19:43.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.829 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:43.829 Controller IO queue size 128, less than required. 00:19:43.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:43.829 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:43.829 WARNING: Some requested NVMe devices were skipped 00:19:43.829 18:14:41 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:19:46.362 Initializing NVMe Controllers 00:19:46.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.362 Controller IO queue size 128, less than required. 00:19:46.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.362 Controller IO queue size 128, less than required. 00:19:46.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:46.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:46.362 Initialization complete. Launching workers. 
00:19:46.362 00:19:46.362 ==================== 00:19:46.362 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:46.362 TCP transport: 00:19:46.362 polls: 6689 00:19:46.362 idle_polls: 3621 00:19:46.362 sock_completions: 3068 00:19:46.362 nvme_completions: 4959 00:19:46.362 submitted_requests: 7643 00:19:46.362 queued_requests: 1 00:19:46.362 00:19:46.362 ==================== 00:19:46.362 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:46.362 TCP transport: 00:19:46.362 polls: 8435 00:19:46.362 idle_polls: 5309 00:19:46.362 sock_completions: 3126 00:19:46.362 nvme_completions: 5974 00:19:46.362 submitted_requests: 9074 00:19:46.362 queued_requests: 1 00:19:46.362 ======================================================== 00:19:46.362 Latency(us) 00:19:46.362 Device Information : IOPS MiB/s Average min max 00:19:46.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1303.15 325.79 101029.32 59889.39 204426.10 00:19:46.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1556.58 389.14 82881.86 36567.90 132588.69 00:19:46.362 ======================================================== 00:19:46.362 Total : 2859.73 714.93 91151.47 36567.90 204426.10 00:19:46.362 00:19:46.362 18:14:44 -- host/perf.sh@66 -- # sync 00:19:46.362 18:14:44 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.620 18:14:44 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:19:46.620 18:14:44 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:19:46.620 18:14:44 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:19:46.878 18:14:44 -- host/perf.sh@72 -- # ls_guid=1c040a7c-920f-4892-a995-8479c97f25f0 00:19:46.878 18:14:44 -- host/perf.sh@73 -- # get_lvs_free_mb 1c040a7c-920f-4892-a995-8479c97f25f0 00:19:46.878 18:14:44 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1c040a7c-920f-4892-a995-8479c97f25f0 00:19:46.878 18:14:44 -- common/autotest_common.sh@1344 -- # local lvs_info 00:19:46.878 18:14:44 -- common/autotest_common.sh@1345 -- # local fc 00:19:46.878 18:14:44 -- common/autotest_common.sh@1346 -- # local cs 00:19:46.878 18:14:44 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:47.137 18:14:44 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:19:47.137 { 00:19:47.137 "base_bdev": "Nvme0n1", 00:19:47.137 "block_size": 4096, 00:19:47.137 "cluster_size": 4194304, 00:19:47.137 "free_clusters": 1278, 00:19:47.137 "name": "lvs_0", 00:19:47.137 "total_data_clusters": 1278, 00:19:47.137 "uuid": "1c040a7c-920f-4892-a995-8479c97f25f0" 00:19:47.137 } 00:19:47.137 ]' 00:19:47.137 18:14:44 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1c040a7c-920f-4892-a995-8479c97f25f0") .free_clusters' 00:19:47.137 18:14:44 -- common/autotest_common.sh@1348 -- # fc=1278 00:19:47.137 18:14:44 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1c040a7c-920f-4892-a995-8479c97f25f0") .cluster_size' 00:19:47.137 5112 00:19:47.137 18:14:45 -- common/autotest_common.sh@1349 -- # cs=4194304 00:19:47.137 18:14:45 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:19:47.137 18:14:45 -- common/autotest_common.sh@1353 -- # echo 5112 00:19:47.137 18:14:45 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:19:47.137 18:14:45 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 1c040a7c-920f-4892-a995-8479c97f25f0 lbd_0 5112 00:19:47.394 18:14:45 -- host/perf.sh@80 -- # lb_guid=121de405-82a6-4664-a96e-71124a8fc68c 00:19:47.395 18:14:45 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 121de405-82a6-4664-a96e-71124a8fc68c lvs_n_0 00:19:47.652 18:14:45 -- host/perf.sh@83 -- # ls_nested_guid=d5fe97e9-3b4e-4e83-8426-a19657869298 00:19:47.652 18:14:45 -- host/perf.sh@84 -- # get_lvs_free_mb d5fe97e9-3b4e-4e83-8426-a19657869298 00:19:47.652 18:14:45 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d5fe97e9-3b4e-4e83-8426-a19657869298 00:19:47.652 18:14:45 -- common/autotest_common.sh@1344 -- # local lvs_info 00:19:47.652 18:14:45 -- common/autotest_common.sh@1345 -- # local fc 00:19:47.652 18:14:45 -- common/autotest_common.sh@1346 -- # local cs 00:19:47.652 18:14:45 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:47.910 18:14:45 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:19:47.910 { 00:19:47.910 "base_bdev": "Nvme0n1", 00:19:47.910 "block_size": 4096, 00:19:47.910 "cluster_size": 4194304, 00:19:47.910 "free_clusters": 0, 00:19:47.910 "name": "lvs_0", 00:19:47.910 "total_data_clusters": 1278, 00:19:47.910 "uuid": "1c040a7c-920f-4892-a995-8479c97f25f0" 00:19:47.910 }, 00:19:47.910 { 00:19:47.910 "base_bdev": "121de405-82a6-4664-a96e-71124a8fc68c", 00:19:47.910 "block_size": 4096, 00:19:47.910 "cluster_size": 4194304, 00:19:47.910 "free_clusters": 1276, 00:19:47.910 "name": "lvs_n_0", 00:19:47.910 "total_data_clusters": 1276, 00:19:47.910 "uuid": "d5fe97e9-3b4e-4e83-8426-a19657869298" 00:19:47.910 } 00:19:47.910 ]' 00:19:47.910 18:14:45 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d5fe97e9-3b4e-4e83-8426-a19657869298") .free_clusters' 00:19:47.910 18:14:45 -- common/autotest_common.sh@1348 -- # fc=1276 00:19:47.910 18:14:45 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d5fe97e9-3b4e-4e83-8426-a19657869298") .cluster_size' 00:19:48.167 5104 00:19:48.167 18:14:45 -- common/autotest_common.sh@1349 -- # cs=4194304 00:19:48.167 18:14:45 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:19:48.167 18:14:45 -- common/autotest_common.sh@1353 -- # echo 5104 00:19:48.167 18:14:45 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:19:48.167 18:14:45 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d5fe97e9-3b4e-4e83-8426-a19657869298 lbd_nest_0 5104 00:19:48.167 18:14:46 -- host/perf.sh@88 -- # lb_nested_guid=2ece6452-cd41-48a9-a5e3-523b56ef06bd 00:19:48.167 18:14:46 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.426 18:14:46 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:19:48.426 18:14:46 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2ece6452-cd41-48a9-a5e3-523b56ef06bd 00:19:48.684 18:14:46 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.942 18:14:46 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:19:48.942 18:14:46 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:19:48.942 18:14:46 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:48.942 18:14:46 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:48.942 18:14:46 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:49.200 No valid NVMe controllers or AIO or URING devices found 00:19:49.200 Initializing NVMe Controllers 00:19:49.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.200 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:49.200 WARNING: Some requested NVMe devices were skipped 00:19:49.200 18:14:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:49.200 18:14:47 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:01.402 Initializing NVMe Controllers 00:20:01.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:01.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:01.402 Initialization complete. Launching workers. 00:20:01.402 ======================================================== 00:20:01.402 Latency(us) 00:20:01.402 Device Information : IOPS MiB/s Average min max 00:20:01.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 766.84 95.86 1303.78 435.00 8438.85 00:20:01.402 ======================================================== 00:20:01.402 Total : 766.84 95.86 1303.78 435.00 8438.85 00:20:01.402 00:20:01.402 18:14:57 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:01.402 18:14:57 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:01.402 18:14:57 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:01.402 No valid NVMe controllers or AIO or URING devices found 00:20:01.402 Initializing NVMe Controllers 00:20:01.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:01.402 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:01.402 WARNING: Some requested NVMe devices were skipped 00:20:01.402 18:14:57 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:01.402 18:14:57 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:11.412 Initializing NVMe Controllers 00:20:11.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.412 Initialization complete. Launching workers. 
00:20:11.412 ======================================================== 00:20:11.412 Latency(us) 00:20:11.412 Device Information : IOPS MiB/s Average min max 00:20:11.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 960.20 120.03 33625.82 7289.66 258704.36 00:20:11.412 ======================================================== 00:20:11.412 Total : 960.20 120.03 33625.82 7289.66 258704.36 00:20:11.412 00:20:11.412 18:15:08 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:11.412 18:15:08 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:11.412 18:15:08 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:11.412 No valid NVMe controllers or AIO or URING devices found 00:20:11.412 Initializing NVMe Controllers 00:20:11.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.412 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:11.412 WARNING: Some requested NVMe devices were skipped 00:20:11.412 18:15:08 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:11.412 18:15:08 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:21.390 Initializing NVMe Controllers 00:20:21.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.390 Controller IO queue size 128, less than required. 00:20:21.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.390 Initialization complete. Launching workers. 
00:20:21.390 ======================================================== 00:20:21.390 Latency(us) 00:20:21.390 Device Information : IOPS MiB/s Average min max 00:20:21.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3641.98 455.25 35164.68 11956.56 102052.56 00:20:21.390 ======================================================== 00:20:21.390 Total : 3641.98 455.25 35164.68 11956.56 102052.56 00:20:21.390 00:20:21.390 18:15:18 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.390 18:15:18 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ece6452-cd41-48a9-a5e3-523b56ef06bd 00:20:21.390 18:15:19 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:21.650 18:15:19 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 121de405-82a6-4664-a96e-71124a8fc68c 00:20:21.908 18:15:19 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:22.168 18:15:19 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:22.168 18:15:19 -- host/perf.sh@114 -- # nvmftestfini 00:20:22.168 18:15:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:22.168 18:15:19 -- nvmf/common.sh@116 -- # sync 00:20:22.168 18:15:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:22.168 18:15:19 -- nvmf/common.sh@119 -- # set +e 00:20:22.168 18:15:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:22.168 18:15:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:22.168 rmmod nvme_tcp 00:20:22.168 rmmod nvme_fabrics 00:20:22.168 rmmod nvme_keyring 00:20:22.168 18:15:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:22.168 18:15:19 -- nvmf/common.sh@123 -- # set -e 00:20:22.168 18:15:19 -- nvmf/common.sh@124 -- # return 0 00:20:22.168 18:15:19 -- nvmf/common.sh@477 -- # '[' -n 81272 ']' 00:20:22.168 18:15:19 -- nvmf/common.sh@478 -- # killprocess 81272 00:20:22.168 18:15:19 -- common/autotest_common.sh@926 -- # '[' -z 81272 ']' 00:20:22.168 18:15:19 -- common/autotest_common.sh@930 -- # kill -0 81272 00:20:22.168 18:15:19 -- common/autotest_common.sh@931 -- # uname 00:20:22.168 18:15:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.168 18:15:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81272 00:20:22.168 killing process with pid 81272 00:20:22.168 18:15:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:22.168 18:15:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:22.168 18:15:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81272' 00:20:22.168 18:15:20 -- common/autotest_common.sh@945 -- # kill 81272 00:20:22.168 18:15:20 -- common/autotest_common.sh@950 -- # wait 81272 00:20:24.072 18:15:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:24.072 18:15:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:24.072 18:15:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:24.072 18:15:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.072 18:15:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:24.072 18:15:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.072 18:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.072 18:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.072 18:15:21 -- nvmf/common.sh@278 -- # 
ip -4 addr flush nvmf_init_if 00:20:24.072 ************************************ 00:20:24.072 END TEST nvmf_perf 00:20:24.072 ************************************ 00:20:24.072 00:20:24.072 real 0m50.502s 00:20:24.072 user 3m9.613s 00:20:24.072 sys 0m10.505s 00:20:24.072 18:15:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.072 18:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:24.072 18:15:21 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:24.072 18:15:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:24.072 18:15:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:24.072 18:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:24.072 ************************************ 00:20:24.072 START TEST nvmf_fio_host 00:20:24.072 ************************************ 00:20:24.072 18:15:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:24.072 * Looking for test storage... 00:20:24.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:24.072 18:15:21 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.072 18:15:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.072 18:15:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.072 18:15:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.072 18:15:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.072 18:15:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.072 18:15:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.072 18:15:21 -- paths/export.sh@5 -- # export PATH 00:20:24.072 18:15:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.072 18:15:21 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.072 18:15:21 -- nvmf/common.sh@7 -- # uname -s 00:20:24.072 18:15:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.072 18:15:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.072 18:15:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.072 18:15:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.072 18:15:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.072 18:15:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.072 18:15:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.072 18:15:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.072 18:15:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.072 18:15:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.072 18:15:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:20:24.072 18:15:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:20:24.072 18:15:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.072 18:15:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.072 18:15:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.072 18:15:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.072 18:15:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.072 18:15:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.072 18:15:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.073 18:15:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.073 18:15:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.073 18:15:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.073 18:15:21 -- paths/export.sh@5 -- # export PATH 00:20:24.073 18:15:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.073 18:15:21 -- nvmf/common.sh@46 -- # : 0 00:20:24.073 18:15:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:24.073 18:15:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:24.073 18:15:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:24.073 18:15:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.073 18:15:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.073 18:15:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:24.073 18:15:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:24.073 18:15:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:24.073 18:15:21 -- host/fio.sh@12 -- # nvmftestinit 00:20:24.073 18:15:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:24.073 18:15:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.073 18:15:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:24.073 18:15:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:24.073 18:15:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:24.073 18:15:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.073 18:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.073 18:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.073 18:15:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:24.073 18:15:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:24.073 18:15:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:24.073 18:15:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:24.073 18:15:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:24.073 18:15:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:24.073 18:15:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.073 18:15:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.073 18:15:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:24.073 18:15:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:24.073 18:15:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:24.073 18:15:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:24.073 18:15:21 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:24.073 18:15:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.073 18:15:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:24.073 18:15:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:24.073 18:15:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:24.073 18:15:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:24.073 18:15:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:24.073 18:15:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:24.073 Cannot find device "nvmf_tgt_br" 00:20:24.073 18:15:21 -- nvmf/common.sh@154 -- # true 00:20:24.073 18:15:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.073 Cannot find device "nvmf_tgt_br2" 00:20:24.073 18:15:21 -- nvmf/common.sh@155 -- # true 00:20:24.073 18:15:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:24.073 18:15:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:24.073 Cannot find device "nvmf_tgt_br" 00:20:24.073 18:15:21 -- nvmf/common.sh@157 -- # true 00:20:24.073 18:15:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:24.073 Cannot find device "nvmf_tgt_br2" 00:20:24.073 18:15:21 -- nvmf/common.sh@158 -- # true 00:20:24.073 18:15:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:24.073 18:15:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:24.073 18:15:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.073 18:15:21 -- nvmf/common.sh@161 -- # true 00:20:24.073 18:15:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.073 18:15:21 -- nvmf/common.sh@162 -- # true 00:20:24.073 18:15:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.073 18:15:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.073 18:15:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.073 18:15:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.073 18:15:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.333 18:15:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.333 18:15:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.333 18:15:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.333 18:15:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.333 18:15:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:24.333 18:15:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:24.333 18:15:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:24.333 18:15:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:24.333 18:15:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.333 18:15:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.333 18:15:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:24.333 18:15:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:24.333 18:15:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:24.333 18:15:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.333 18:15:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.333 18:15:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.333 18:15:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.333 18:15:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.333 18:15:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:24.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:20:24.333 00:20:24.333 --- 10.0.0.2 ping statistics --- 00:20:24.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.333 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:24.333 18:15:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:24.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:24.333 00:20:24.333 --- 10.0.0.3 ping statistics --- 00:20:24.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.333 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:24.333 18:15:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:24.333 00:20:24.333 --- 10.0.0.1 ping statistics --- 00:20:24.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.333 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:24.333 18:15:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.333 18:15:22 -- nvmf/common.sh@421 -- # return 0 00:20:24.333 18:15:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:24.333 18:15:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.333 18:15:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:24.333 18:15:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:24.333 18:15:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.333 18:15:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:24.333 18:15:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:24.333 18:15:22 -- host/fio.sh@14 -- # [[ y != y ]] 00:20:24.333 18:15:22 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:20:24.333 18:15:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:24.333 18:15:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.333 18:15:22 -- host/fio.sh@22 -- # nvmfpid=82243 00:20:24.333 18:15:22 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.333 18:15:22 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.333 18:15:22 -- host/fio.sh@26 -- # waitforlisten 82243 00:20:24.333 18:15:22 -- common/autotest_common.sh@819 -- # '[' -z 82243 ']' 00:20:24.333 18:15:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.333 18:15:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:24.333 18:15:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.333 18:15:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:24.333 18:15:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.333 [2024-04-25 18:15:22.237294] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:24.333 [2024-04-25 18:15:22.237408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.592 [2024-04-25 18:15:22.378035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.592 [2024-04-25 18:15:22.464823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:24.592 [2024-04-25 18:15:22.465117] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.592 [2024-04-25 18:15:22.465230] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.592 [2024-04-25 18:15:22.465627] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.592 [2024-04-25 18:15:22.465902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.592 [2024-04-25 18:15:22.466070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.592 [2024-04-25 18:15:22.466244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.592 [2024-04-25 18:15:22.466255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.528 18:15:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:25.528 18:15:23 -- common/autotest_common.sh@852 -- # return 0 00:20:25.528 18:15:23 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.528 18:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.528 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.528 [2024-04-25 18:15:23.216988] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.528 18:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.528 18:15:23 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:25.528 18:15:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:25.528 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.528 18:15:23 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:25.528 18:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.528 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.528 Malloc1 00:20:25.528 18:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.528 18:15:23 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.528 18:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.528 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.528 18:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.528 18:15:23 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:25.528 18:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.528 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.528 18:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.528 18:15:23 -- host/fio.sh@33 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.528 18:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.528 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.528 [2024-04-25 18:15:23.321953] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.528 18:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.528 18:15:23 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:25.528 18:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.528 18:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.528 18:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.528 18:15:23 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:25.528 18:15:23 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:25.528 18:15:23 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:25.528 18:15:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:25.528 18:15:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.528 18:15:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:25.528 18:15:23 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.528 18:15:23 -- common/autotest_common.sh@1320 -- # shift 00:20:25.528 18:15:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:25.528 18:15:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:25.528 18:15:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:25.528 18:15:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:25.528 18:15:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:25.528 18:15:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:25.528 18:15:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:25.528 18:15:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:25.797 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:25.797 fio-3.35 00:20:25.797 Starting 1 thread 00:20:28.344 00:20:28.344 test: (groupid=0, jobs=1): err= 0: pid=82316: Thu Apr 25 18:15:25 2024 00:20:28.344 read: IOPS=9235, BW=36.1MiB/s 
(37.8MB/s)(72.4MiB/2006msec) 00:20:28.344 slat (nsec): min=1738, max=343713, avg=2341.73, stdev=3524.42 00:20:28.344 clat (usec): min=3569, max=11871, avg=7386.67, stdev=639.26 00:20:28.344 lat (usec): min=3608, max=11873, avg=7389.01, stdev=639.18 00:20:28.344 clat percentiles (usec): 00:20:28.344 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6652], 20.00th=[ 6849], 00:20:28.344 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7504], 00:20:28.344 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:20:28.344 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10159], 99.95th=[11338], 00:20:28.344 | 99.99th=[11863] 00:20:28.344 bw ( KiB/s): min=36448, max=37712, per=99.93%, avg=36916.00, stdev=558.34, samples=4 00:20:28.344 iops : min= 9112, max= 9428, avg=9229.00, stdev=139.59, samples=4 00:20:28.344 write: IOPS=9239, BW=36.1MiB/s (37.8MB/s)(72.4MiB/2006msec); 0 zone resets 00:20:28.344 slat (nsec): min=1810, max=292281, avg=2372.87, stdev=2693.70 00:20:28.344 clat (usec): min=2627, max=11313, avg=6427.78, stdev=532.56 00:20:28.344 lat (usec): min=2641, max=11315, avg=6430.16, stdev=532.54 00:20:28.344 clat percentiles (usec): 00:20:28.344 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:20:28.344 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:20:28.344 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:20:28.344 | 99.00th=[ 7701], 99.50th=[ 7963], 99.90th=[ 9241], 99.95th=[10028], 00:20:28.344 | 99.99th=[11207] 00:20:28.344 bw ( KiB/s): min=36616, max=37464, per=99.99%, avg=36952.00, stdev=365.56, samples=4 00:20:28.344 iops : min= 9154, max= 9366, avg=9238.00, stdev=91.39, samples=4 00:20:28.344 lat (msec) : 4=0.06%, 10=99.84%, 20=0.09% 00:20:28.344 cpu : usr=64.89%, sys=25.64%, ctx=9, majf=0, minf=6 00:20:28.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:28.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.344 issued rwts: total=18527,18534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.344 00:20:28.344 Run status group 0 (all jobs): 00:20:28.344 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2006-2006msec 00:20:28.344 WRITE: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2006-2006msec 00:20:28.344 18:15:25 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:28.345 18:15:25 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:28.345 18:15:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:28.345 18:15:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.345 18:15:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:28.345 18:15:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.345 18:15:25 -- common/autotest_common.sh@1320 -- # shift 00:20:28.345 18:15:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:28.345 18:15:25 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:28.345 18:15:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:28.345 18:15:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:28.345 18:15:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:28.345 18:15:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:28.345 18:15:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:28.345 18:15:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:28.345 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:28.345 fio-3.35 00:20:28.345 Starting 1 thread 00:20:30.877 00:20:30.877 test: (groupid=0, jobs=1): err= 0: pid=82360: Thu Apr 25 18:15:28 2024 00:20:30.877 read: IOPS=8969, BW=140MiB/s (147MB/s)(281MiB/2007msec) 00:20:30.877 slat (usec): min=2, max=104, avg= 3.42, stdev= 2.42 00:20:30.877 clat (usec): min=2513, max=16564, avg=8562.58, stdev=2022.89 00:20:30.877 lat (usec): min=2516, max=16567, avg=8566.00, stdev=2022.98 00:20:30.877 clat percentiles (usec): 00:20:30.877 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6783], 00:20:30.877 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 8979], 00:20:30.877 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11076], 95.00th=[11731], 00:20:30.877 | 99.00th=[13829], 99.50th=[14615], 99.90th=[15795], 99.95th=[16188], 00:20:30.877 | 99.99th=[16581] 00:20:30.877 bw ( KiB/s): min=68320, max=77856, per=49.71%, avg=71344.00, stdev=4499.96, samples=4 00:20:30.878 iops : min= 4270, max= 4866, avg=4459.00, stdev=281.25, samples=4 00:20:30.878 write: IOPS=5285, BW=82.6MiB/s (86.6MB/s)(145MiB/1756msec); 0 zone resets 00:20:30.878 slat (usec): min=30, max=340, avg=34.65, stdev= 8.71 00:20:30.878 clat (usec): min=5924, max=16859, avg=10256.64, stdev=1766.28 00:20:30.878 lat (usec): min=5955, max=16891, avg=10291.28, stdev=1766.34 00:20:30.878 clat percentiles (usec): 00:20:30.878 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8848], 00:20:30.878 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10552], 00:20:30.878 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12649], 95.00th=[13698], 00:20:30.878 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16450], 99.95th=[16712], 00:20:30.878 | 99.99th=[16909] 00:20:30.878 bw ( KiB/s): min=71360, max=80192, per=87.80%, avg=74256.00, stdev=4081.10, samples=4 00:20:30.878 iops : min= 4460, max= 5012, avg=4641.00, stdev=255.07, samples=4 00:20:30.878 lat (msec) : 4=0.22%, 10=65.55%, 20=34.23% 00:20:30.878 cpu : usr=69.94%, sys=19.79%, ctx=4, majf=0, minf=21 00:20:30.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, 
>=64=98.6% 00:20:30.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:30.878 issued rwts: total=18002,9282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:30.878 00:20:30.878 Run status group 0 (all jobs): 00:20:30.878 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=281MiB (295MB), run=2007-2007msec 00:20:30.878 WRITE: bw=82.6MiB/s (86.6MB/s), 82.6MiB/s-82.6MiB/s (86.6MB/s-86.6MB/s), io=145MiB (152MB), run=1756-1756msec 00:20:30.878 18:15:28 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:20:30.878 18:15:28 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:20:30.878 18:15:28 -- host/fio.sh@49 -- # get_nvme_bdfs 00:20:30.878 18:15:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:30.878 18:15:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:30.878 18:15:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:30.878 18:15:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:30.878 18:15:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:30.878 18:15:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:30.878 18:15:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:30.878 18:15:28 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 Nvme0n1 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- host/fio.sh@51 -- # ls_guid=c3f63e4a-af8c-421b-bca6-ced22517e7ea 00:20:30.878 18:15:28 -- host/fio.sh@52 -- # get_lvs_free_mb c3f63e4a-af8c-421b-bca6-ced22517e7ea 00:20:30.878 18:15:28 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c3f63e4a-af8c-421b-bca6-ced22517e7ea 00:20:30.878 18:15:28 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:30.878 18:15:28 -- common/autotest_common.sh@1345 -- # local fc 00:20:30.878 18:15:28 -- common/autotest_common.sh@1346 -- # local cs 00:20:30.878 18:15:28 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:30.878 { 00:20:30.878 "base_bdev": "Nvme0n1", 00:20:30.878 "block_size": 4096, 00:20:30.878 "cluster_size": 1073741824, 00:20:30.878 "free_clusters": 4, 00:20:30.878 "name": "lvs_0", 
00:20:30.878 "total_data_clusters": 4, 00:20:30.878 "uuid": "c3f63e4a-af8c-421b-bca6-ced22517e7ea" 00:20:30.878 } 00:20:30.878 ]' 00:20:30.878 18:15:28 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c3f63e4a-af8c-421b-bca6-ced22517e7ea") .free_clusters' 00:20:30.878 18:15:28 -- common/autotest_common.sh@1348 -- # fc=4 00:20:30.878 18:15:28 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c3f63e4a-af8c-421b-bca6-ced22517e7ea") .cluster_size' 00:20:30.878 4096 00:20:30.878 18:15:28 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:20:30.878 18:15:28 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:20:30.878 18:15:28 -- common/autotest_common.sh@1353 -- # echo 4096 00:20:30.878 18:15:28 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 d9ac7ce1-0455-4af8-ad5c-3862507041b5 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:30.878 18:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.878 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.878 18:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.878 18:15:28 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:30.878 18:15:28 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:30.878 18:15:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:30.878 18:15:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.878 18:15:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:30.878 18:15:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:30.878 18:15:28 -- common/autotest_common.sh@1320 -- # shift 00:20:30.878 18:15:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:30.878 18:15:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:30.878 18:15:28 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:30.878 18:15:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:30.878 18:15:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:30.878 18:15:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:30.878 18:15:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:30.878 18:15:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:30.878 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:30.878 fio-3.35 00:20:30.878 Starting 1 thread 00:20:33.410 00:20:33.410 test: (groupid=0, jobs=1): err= 0: pid=82443: Thu Apr 25 18:15:31 2024 00:20:33.410 read: IOPS=6161, BW=24.1MiB/s (25.2MB/s)(49.3MiB/2049msec) 00:20:33.410 slat (nsec): min=1883, max=343960, avg=2924.70, stdev=4747.32 00:20:33.410 clat (usec): min=4289, max=61093, avg=11032.86, stdev=3204.80 00:20:33.410 lat (usec): min=4299, max=61095, avg=11035.79, stdev=3204.70 00:20:33.410 clat percentiles (usec): 00:20:33.410 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:20:33.410 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:20:33.410 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12518], 00:20:33.410 | 99.00th=[13435], 99.50th=[14484], 99.90th=[57410], 99.95th=[58459], 00:20:33.410 | 99.99th=[61080] 00:20:33.410 bw ( KiB/s): min=23960, max=25760, per=100.00%, avg=25124.00, stdev=795.57, samples=4 00:20:33.410 iops : min= 5990, max= 6440, avg=6281.00, stdev=198.89, samples=4 00:20:33.410 write: IOPS=6150, BW=24.0MiB/s (25.2MB/s)(49.2MiB/2049msec); 0 zone resets 00:20:33.410 slat (nsec): min=1977, max=267462, avg=3079.24, stdev=3688.60 00:20:33.410 clat (usec): min=2693, max=58204, avg=9667.03, stdev=3247.39 00:20:33.410 lat (usec): min=2706, max=58206, avg=9670.10, stdev=3247.32 00:20:33.410 clat percentiles (usec): 00:20:33.410 | 1.00th=[ 7439], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:20:33.410 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:20:33.410 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:20:33.410 | 99.00th=[11731], 99.50th=[48497], 99.90th=[55837], 99.95th=[56361], 00:20:33.410 | 99.99th=[57410] 00:20:33.410 bw ( KiB/s): min=24832, max=25280, per=100.00%, avg=25074.00, stdev=210.08, samples=4 00:20:33.410 iops : min= 6208, max= 6320, avg=6268.50, stdev=52.52, samples=4 00:20:33.410 lat (msec) : 4=0.04%, 10=47.22%, 20=52.24%, 50=0.10%, 100=0.40% 00:20:33.410 cpu : usr=68.36%, sys=23.54%, ctx=531, majf=0, minf=25 00:20:33.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:33.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:33.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:33.410 issued rwts: total=12624,12602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:33.410 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:33.410 00:20:33.410 Run status group 0 (all jobs): 
00:20:33.410 READ: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=49.3MiB (51.7MB), run=2049-2049msec 00:20:33.410 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=49.2MiB (51.6MB), run=2049-2049msec 00:20:33.410 18:15:31 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:33.410 18:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.410 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.410 18:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.410 18:15:31 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:20:33.410 18:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.410 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.410 18:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.410 18:15:31 -- host/fio.sh@62 -- # ls_nested_guid=4bc8f802-aa8c-49fa-ba7b-a5de09740dbd 00:20:33.410 18:15:31 -- host/fio.sh@63 -- # get_lvs_free_mb 4bc8f802-aa8c-49fa-ba7b-a5de09740dbd 00:20:33.410 18:15:31 -- common/autotest_common.sh@1343 -- # local lvs_uuid=4bc8f802-aa8c-49fa-ba7b-a5de09740dbd 00:20:33.410 18:15:31 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:33.410 18:15:31 -- common/autotest_common.sh@1345 -- # local fc 00:20:33.410 18:15:31 -- common/autotest_common.sh@1346 -- # local cs 00:20:33.410 18:15:31 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:33.410 18:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.410 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.410 18:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.410 18:15:31 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:33.411 { 00:20:33.411 "base_bdev": "Nvme0n1", 00:20:33.411 "block_size": 4096, 00:20:33.411 "cluster_size": 1073741824, 00:20:33.411 "free_clusters": 0, 00:20:33.411 "name": "lvs_0", 00:20:33.411 "total_data_clusters": 4, 00:20:33.411 "uuid": "c3f63e4a-af8c-421b-bca6-ced22517e7ea" 00:20:33.411 }, 00:20:33.411 { 00:20:33.411 "base_bdev": "d9ac7ce1-0455-4af8-ad5c-3862507041b5", 00:20:33.411 "block_size": 4096, 00:20:33.411 "cluster_size": 4194304, 00:20:33.411 "free_clusters": 1022, 00:20:33.411 "name": "lvs_n_0", 00:20:33.411 "total_data_clusters": 1022, 00:20:33.411 "uuid": "4bc8f802-aa8c-49fa-ba7b-a5de09740dbd" 00:20:33.411 } 00:20:33.411 ]' 00:20:33.411 18:15:31 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="4bc8f802-aa8c-49fa-ba7b-a5de09740dbd") .free_clusters' 00:20:33.411 18:15:31 -- common/autotest_common.sh@1348 -- # fc=1022 00:20:33.411 18:15:31 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="4bc8f802-aa8c-49fa-ba7b-a5de09740dbd") .cluster_size' 00:20:33.411 4088 00:20:33.411 18:15:31 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:33.411 18:15:31 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:20:33.411 18:15:31 -- common/autotest_common.sh@1353 -- # echo 4088 00:20:33.411 18:15:31 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:20:33.411 18:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.411 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.411 c1e44f3e-6b52-49be-8952-d240ceeb5272 00:20:33.411 18:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.411 18:15:31 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:20:33.411 18:15:31 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.411 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.411 18:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.411 18:15:31 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:20:33.411 18:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.411 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.411 18:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.411 18:15:31 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:33.411 18:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.411 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.411 18:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.411 18:15:31 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:33.411 18:15:31 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:33.411 18:15:31 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:33.411 18:15:31 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:33.411 18:15:31 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:33.411 18:15:31 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.411 18:15:31 -- common/autotest_common.sh@1320 -- # shift 00:20:33.411 18:15:31 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:33.411 18:15:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:33.411 18:15:31 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:33.411 18:15:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:33.411 18:15:31 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:33.411 18:15:31 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:33.411 18:15:31 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:33.411 18:15:31 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:33.670 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:33.670 fio-3.35 00:20:33.670 Starting 1 thread 00:20:36.201 00:20:36.201 test: (groupid=0, jobs=1): err= 0: pid=82499: Thu Apr 25 18:15:33 2024 00:20:36.201 read: IOPS=5661, BW=22.1MiB/s 
(23.2MB/s)(44.5MiB/2010msec) 00:20:36.201 slat (usec): min=2, max=339, avg= 3.07, stdev= 4.80 00:20:36.201 clat (usec): min=4864, max=21068, avg=12092.62, stdev=1147.07 00:20:36.201 lat (usec): min=4874, max=21070, avg=12095.69, stdev=1146.86 00:20:36.201 clat percentiles (usec): 00:20:36.201 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:20:36.201 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:20:36.201 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:20:36.201 | 99.00th=[14877], 99.50th=[15139], 99.90th=[17957], 99.95th=[18482], 00:20:36.201 | 99.99th=[20841] 00:20:36.201 bw ( KiB/s): min=21544, max=23096, per=99.92%, avg=22628.00, stdev=733.17, samples=4 00:20:36.201 iops : min= 5386, max= 5774, avg=5657.00, stdev=183.29, samples=4 00:20:36.201 write: IOPS=5627, BW=22.0MiB/s (23.0MB/s)(44.2MiB/2010msec); 0 zone resets 00:20:36.201 slat (usec): min=2, max=285, avg= 3.26, stdev= 4.12 00:20:36.201 clat (usec): min=2644, max=20684, avg=10506.84, stdev=1025.04 00:20:36.201 lat (usec): min=2658, max=20686, avg=10510.11, stdev=1024.93 00:20:36.201 clat percentiles (usec): 00:20:36.201 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:20:36.201 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:20:36.201 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[11994], 00:20:36.201 | 99.00th=[12649], 99.50th=[13173], 99.90th=[18220], 99.95th=[19792], 00:20:36.201 | 99.99th=[20579] 00:20:36.201 bw ( KiB/s): min=22344, max=22760, per=99.96%, avg=22500.00, stdev=193.93, samples=4 00:20:36.201 iops : min= 5586, max= 5690, avg=5625.00, stdev=48.48, samples=4 00:20:36.201 lat (msec) : 4=0.03%, 10=15.57%, 20=84.38%, 50=0.02% 00:20:36.201 cpu : usr=69.19%, sys=23.20%, ctx=5, majf=0, minf=25 00:20:36.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:20:36.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:36.201 issued rwts: total=11380,11311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:36.201 00:20:36.201 Run status group 0 (all jobs): 00:20:36.201 READ: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.6MB), run=2010-2010msec 00:20:36.201 WRITE: bw=22.0MiB/s (23.0MB/s), 22.0MiB/s-22.0MiB/s (23.0MB/s-23.0MB/s), io=44.2MiB (46.3MB), run=2010-2010msec 00:20:36.201 18:15:33 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:36.201 18:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.201 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 18:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.202 18:15:33 -- host/fio.sh@72 -- # sync 00:20:36.202 18:15:33 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:20:36.202 18:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.202 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 18:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.202 18:15:33 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:20:36.202 18:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.202 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 18:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.202 18:15:33 -- host/fio.sh@76 
-- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:20:36.202 18:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.202 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 18:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.202 18:15:33 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:20:36.202 18:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.202 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 18:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.202 18:15:33 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:20:36.202 18:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.202 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:20:36.769 18:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.769 18:15:34 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:36.769 18:15:34 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:36.769 18:15:34 -- host/fio.sh@84 -- # nvmftestfini 00:20:36.769 18:15:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:36.769 18:15:34 -- nvmf/common.sh@116 -- # sync 00:20:36.769 18:15:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:36.769 18:15:34 -- nvmf/common.sh@119 -- # set +e 00:20:36.769 18:15:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:36.769 18:15:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:36.769 rmmod nvme_tcp 00:20:36.769 rmmod nvme_fabrics 00:20:36.769 rmmod nvme_keyring 00:20:36.769 18:15:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:36.769 18:15:34 -- nvmf/common.sh@123 -- # set -e 00:20:36.769 18:15:34 -- nvmf/common.sh@124 -- # return 0 00:20:36.769 18:15:34 -- nvmf/common.sh@477 -- # '[' -n 82243 ']' 00:20:36.769 18:15:34 -- nvmf/common.sh@478 -- # killprocess 82243 00:20:36.769 18:15:34 -- common/autotest_common.sh@926 -- # '[' -z 82243 ']' 00:20:36.769 18:15:34 -- common/autotest_common.sh@930 -- # kill -0 82243 00:20:36.769 18:15:34 -- common/autotest_common.sh@931 -- # uname 00:20:36.769 18:15:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:36.769 18:15:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82243 00:20:36.769 killing process with pid 82243 00:20:36.769 18:15:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:36.769 18:15:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:36.769 18:15:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82243' 00:20:36.769 18:15:34 -- common/autotest_common.sh@945 -- # kill 82243 00:20:36.769 18:15:34 -- common/autotest_common.sh@950 -- # wait 82243 00:20:37.028 18:15:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:37.028 18:15:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:37.028 18:15:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:37.028 18:15:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.028 18:15:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:37.028 18:15:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.028 18:15:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.028 18:15:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.028 18:15:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:37.028 ************************************ 00:20:37.028 END TEST nvmf_fio_host 00:20:37.028 ************************************ 00:20:37.028 
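Condensed for reference, the host fio pass above is driven entirely through rpc.py: attach the local PCIe controller, layer a logical volume store (and then a nested one) on top of it, export a volume over NVMe/TCP, and point the fio SPDK plugin at the listener. A rough sketch assembled from the rpc_cmd calls logged above (rpc.py stands for scripts/rpc.py against the running target; sizes and the 10.0.0.2:4420 listener follow the log, wrapper-specific flags dropped):

  rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0      # local NVMe -> Nvme0n1 bdev
  rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0              # 1 GiB clusters
  rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096                              # 4096 MiB lvol
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
  # the nested pass repeats the subsystem/listener steps on a second store built inside the first lvol (cnode3):
  rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
  rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088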
00:20:37.028 real 0m13.128s 00:20:37.028 user 0m54.060s 00:20:37.028 sys 0m3.651s 00:20:37.028 18:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.028 18:15:34 -- common/autotest_common.sh@10 -- # set +x 00:20:37.028 18:15:34 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:37.028 18:15:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:37.028 18:15:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:37.028 18:15:34 -- common/autotest_common.sh@10 -- # set +x 00:20:37.028 ************************************ 00:20:37.028 START TEST nvmf_failover 00:20:37.028 ************************************ 00:20:37.028 18:15:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:37.287 * Looking for test storage... 00:20:37.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:37.287 18:15:35 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.287 18:15:35 -- nvmf/common.sh@7 -- # uname -s 00:20:37.287 18:15:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.287 18:15:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.287 18:15:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.287 18:15:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.287 18:15:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.287 18:15:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.287 18:15:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.287 18:15:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.287 18:15:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.287 18:15:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.287 18:15:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:20:37.287 18:15:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:20:37.287 18:15:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.287 18:15:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.287 18:15:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.287 18:15:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.287 18:15:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.287 18:15:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.287 18:15:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.287 18:15:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.287 18:15:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.287 18:15:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.287 18:15:35 -- paths/export.sh@5 -- # export PATH 00:20:37.287 18:15:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.287 18:15:35 -- nvmf/common.sh@46 -- # : 0 00:20:37.287 18:15:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:37.287 18:15:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:37.287 18:15:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:37.287 18:15:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.287 18:15:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.287 18:15:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:37.287 18:15:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:37.287 18:15:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:37.287 18:15:35 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:37.287 18:15:35 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:37.287 18:15:35 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.287 18:15:35 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.287 18:15:35 -- host/failover.sh@18 -- # nvmftestinit 00:20:37.287 18:15:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:37.287 18:15:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.287 18:15:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:37.287 18:15:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:37.287 18:15:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:37.287 18:15:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.287 18:15:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.287 18:15:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.287 18:15:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:37.287 18:15:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:37.287 18:15:35 -- nvmf/common.sh@411 
-- # [[ virt == phy ]] 00:20:37.287 18:15:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:37.287 18:15:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:37.287 18:15:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:37.287 18:15:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.287 18:15:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.287 18:15:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:37.287 18:15:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:37.287 18:15:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:37.287 18:15:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:37.287 18:15:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:37.287 18:15:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.287 18:15:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:37.287 18:15:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:37.287 18:15:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:37.287 18:15:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:37.287 18:15:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:37.287 18:15:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:37.287 Cannot find device "nvmf_tgt_br" 00:20:37.288 18:15:35 -- nvmf/common.sh@154 -- # true 00:20:37.288 18:15:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.288 Cannot find device "nvmf_tgt_br2" 00:20:37.288 18:15:35 -- nvmf/common.sh@155 -- # true 00:20:37.288 18:15:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:37.288 18:15:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:37.288 Cannot find device "nvmf_tgt_br" 00:20:37.288 18:15:35 -- nvmf/common.sh@157 -- # true 00:20:37.288 18:15:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:37.288 Cannot find device "nvmf_tgt_br2" 00:20:37.288 18:15:35 -- nvmf/common.sh@158 -- # true 00:20:37.288 18:15:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:37.288 18:15:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:37.288 18:15:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.288 18:15:35 -- nvmf/common.sh@161 -- # true 00:20:37.288 18:15:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.288 18:15:35 -- nvmf/common.sh@162 -- # true 00:20:37.288 18:15:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.288 18:15:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.288 18:15:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.288 18:15:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.288 18:15:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:37.288 18:15:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.546 18:15:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.546 18:15:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.2/24 dev nvmf_tgt_if 00:20:37.546 18:15:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:37.546 18:15:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:37.546 18:15:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:37.546 18:15:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:37.546 18:15:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:37.546 18:15:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.546 18:15:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.546 18:15:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.546 18:15:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:37.546 18:15:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:37.546 18:15:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.546 18:15:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:37.546 18:15:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.546 18:15:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.546 18:15:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.546 18:15:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:37.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:37.546 00:20:37.546 --- 10.0.0.2 ping statistics --- 00:20:37.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.546 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:37.546 18:15:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:37.546 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.546 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:20:37.546 00:20:37.546 --- 10.0.0.3 ping statistics --- 00:20:37.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.546 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:37.546 18:15:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:20:37.546 00:20:37.546 --- 10.0.0.1 ping statistics --- 00:20:37.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.546 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:37.546 18:15:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.546 18:15:35 -- nvmf/common.sh@421 -- # return 0 00:20:37.546 18:15:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:37.546 18:15:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.546 18:15:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:37.546 18:15:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:37.546 18:15:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.546 18:15:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:37.546 18:15:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:37.546 18:15:35 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:37.546 18:15:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:37.546 18:15:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:37.546 18:15:35 -- common/autotest_common.sh@10 -- # set +x 00:20:37.546 18:15:35 -- nvmf/common.sh@469 -- # nvmfpid=82722 00:20:37.546 18:15:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:37.546 18:15:35 -- nvmf/common.sh@470 -- # waitforlisten 82722 00:20:37.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.546 18:15:35 -- common/autotest_common.sh@819 -- # '[' -z 82722 ']' 00:20:37.546 18:15:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.546 18:15:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.546 18:15:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.546 18:15:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.546 18:15:35 -- common/autotest_common.sh@10 -- # set +x 00:20:37.546 [2024-04-25 18:15:35.451306] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:37.546 [2024-04-25 18:15:35.451384] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.805 [2024-04-25 18:15:35.583397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:37.805 [2024-04-25 18:15:35.661889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:37.805 [2024-04-25 18:15:35.662369] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.805 [2024-04-25 18:15:35.662428] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.805 [2024-04-25 18:15:35.662564] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.805 [2024-04-25 18:15:35.663092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.805 [2024-04-25 18:15:35.663255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.805 [2024-04-25 18:15:35.663260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.770 18:15:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.770 18:15:36 -- common/autotest_common.sh@852 -- # return 0 00:20:38.771 18:15:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:38.771 18:15:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:38.771 18:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.771 18:15:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.771 18:15:36 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:38.771 [2024-04-25 18:15:36.694049] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.029 18:15:36 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:39.288 Malloc0 00:20:39.288 18:15:37 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.547 18:15:37 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.547 18:15:37 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.806 [2024-04-25 18:15:37.615592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.806 18:15:37 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:40.065 [2024-04-25 18:15:37.819914] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:40.065 18:15:37 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:40.324 [2024-04-25 18:15:38.024350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:40.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.324 18:15:38 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:40.324 18:15:38 -- host/failover.sh@31 -- # bdevperf_pid=82828 00:20:40.324 18:15:38 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.324 18:15:38 -- host/failover.sh@34 -- # waitforlisten 82828 /var/tmp/bdevperf.sock 00:20:40.324 18:15:38 -- common/autotest_common.sh@819 -- # '[' -z 82828 ']' 00:20:40.324 18:15:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.324 18:15:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:40.324 18:15:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
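The failover target configured just above reduces to a single malloc-backed subsystem exposed on three TCP portals of the same address; condensed from the commands logged above (rpc.py stands for scripts/rpc.py against the nvmf_tgt started inside the nvmf_tgt_ns_spdk namespace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done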
00:20:40.324 18:15:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:40.324 18:15:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.257 18:15:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:41.257 18:15:39 -- common/autotest_common.sh@852 -- # return 0 00:20:41.257 18:15:39 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.516 NVMe0n1 00:20:41.516 18:15:39 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.774 00:20:41.774 18:15:39 -- host/failover.sh@39 -- # run_test_pid=82876 00:20:41.774 18:15:39 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.774 18:15:39 -- host/failover.sh@41 -- # sleep 1 00:20:42.712 18:15:40 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.971 [2024-04-25 18:15:40.863019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.971 [2024-04-25 18:15:40.863184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863199] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 00:20:42.972 [2024-04-25 18:15:40.863648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set 
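The qpair state-change messages above and below are the expected side effect of the failover exercise: bdevperf keeps verify I/O running against a multipath NVMe bdev while the test alternately removes and restores the target's listeners. A condensed view of that sequence, assembled from the commands in this log (a sketch only; bdevperf is started with -z, so in the real script the RPC calls run only after its /var/tmp/bdevperf.sock socket is up):

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # attach the same subsystem through two portals -> one multipath bdev NVMe0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # drop and restore target portals while I/O is in flight
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422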
00:20:42.972 [2024-04-25 18:15:40.863656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa22c20 is same with the state(5) to be set
00:20:42.972 18:15:40 -- host/failover.sh@45 -- # sleep 3
00:20:46.256 18:15:43 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:46.515
00:20:46.515 18:15:44 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:46.515 [2024-04-25 18:15:44.406661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23b20 is same with the state(5) to be set
00:20:46.515 18:15:44 -- host/failover.sh@50 -- # sleep 3
00:20:49.800 18:15:47 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:49.800 [2024-04-25 18:15:47.666123] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:49.800 18:15:47 -- host/failover.sh@55 -- # sleep 1
00:20:51.176 18:15:48 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:20:51.176 [2024-04-25 18:15:48.931161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd7f80 is same with the state(5) to be set
00:20:51.177 18:15:48 -- host/failover.sh@59 -- # wait 82876
00:20:57.741 0
00:20:57.741 18:15:54 -- host/failover.sh@61 -- # killprocess 82828
00:20:57.741 18:15:54 -- common/autotest_common.sh@926 -- # '[' -z 82828 ']'
00:20:57.741 18:15:54 -- common/autotest_common.sh@930 -- # kill -0 82828
00:20:57.741 18:15:54 -- common/autotest_common.sh@931 -- # uname
00:20:57.741 18:15:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:57.741 18:15:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82828
00:20:57.741 killing process with pid 82828
00:20:57.741 18:15:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:20:57.741 18:15:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:20:57.741 18:15:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82828'
00:20:57.741 18:15:54 -- common/autotest_common.sh@945 -- # kill 82828
00:20:57.741 18:15:54 -- common/autotest_common.sh@950 -- # wait 82828
00:20:57.741 18:15:54 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:57.741 [2024-04-25 18:15:38.084265] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:20:57.741 [2024-04-25 18:15:38.084377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82828 ] 00:20:57.741 [2024-04-25 18:15:38.220553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.741 [2024-04-25 18:15:38.328065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.741 Running I/O for 15 seconds... 00:20:57.741 [2024-04-25 18:15:40.864548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.741 [2024-04-25 18:15:40.864658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.864683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.741 [2024-04-25 18:15:40.864708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.864723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.741 [2024-04-25 18:15:40.864737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.864752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.741 [2024-04-25 18:15:40.864766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.864781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc3010 is same with the state(5) to be set 00:20:57.741 [2024-04-25 18:15:40.864860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.864884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.864912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.864928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.864943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.864959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.864976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.864990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.741 [2024-04-25 18:15:40.865368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.741 [2024-04-25 18:15:40.865385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:58 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2840 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.865976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.865989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 
18:15:40.866118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.742 [2024-04-25 18:15:40.866238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.742 [2024-04-25 18:15:40.866297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.742 [2024-04-25 18:15:40.866353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.742 [2024-04-25 18:15:40.866420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.742 [2024-04-25 18:15:40.866519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.742 [2024-04-25 18:15:40.866548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.742 [2024-04-25 18:15:40.866606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.742 [2024-04-25 18:15:40.866703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.742 [2024-04-25 18:15:40.866724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.866755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.866784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.866815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.866845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.866884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.866914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.866951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.866981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.866997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:57.743 [2024-04-25 18:15:40.867116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.743 [2024-04-25 18:15:40.867920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.867964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.867979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.868001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.868015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.743 [2024-04-25 18:15:40.868031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.743 [2024-04-25 18:15:40.868046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.744 [2024-04-25 18:15:40.868326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.744 [2024-04-25 18:15:40.868359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.744 [2024-04-25 18:15:40.868449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.744 [2024-04-25 18:15:40.868480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.744 [2024-04-25 18:15:40.868545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.744 [2024-04-25 18:15:40.868586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.744 [2024-04-25 18:15:40.868678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.868976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.868990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.869005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.869019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.869041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.869056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.869072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.869085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.869101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.869115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.869131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.744 [2024-04-25 18:15:40.869145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.869159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ffc0 is same with the state(5) to be set 00:20:57.744 [2024-04-25 18:15:40.869176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.744 [2024-04-25 18:15:40.869217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.744 [2024-04-25 18:15:40.869233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2936 len:8 PRP1 0x0 PRP2 0x0 00:20:57.744 [2024-04-25 18:15:40.869249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.744 [2024-04-25 18:15:40.869336] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe2ffc0 was disconnected and freed. reset controller. 00:20:57.744 [2024-04-25 18:15:40.869361] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:57.744 [2024-04-25 18:15:40.869377] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.744 [2024-04-25 18:15:40.871621] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.744 [2024-04-25 18:15:40.871663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc3010 (9): Bad file descriptor 00:20:57.744 [2024-04-25 18:15:40.890764] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:57.744 [2024-04-25 18:15:44.407230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407700] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.407971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.407986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.745 [2024-04-25 18:15:44.408046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.745 [2024-04-25 18:15:44.408105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.745 [2024-04-25 18:15:44.408136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.745 [2024-04-25 18:15:44.408360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.745 [2024-04-25 18:15:44.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.745 [2024-04-25 18:15:44.408485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.408964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.408979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:57.746 [2024-04-25 18:15:44.408993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.746 [2024-04-25 18:15:44.409021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.746 [2024-04-25 18:15:44.409088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.746 [2024-04-25 18:15:44.409116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.746 [2024-04-25 18:15:44.409144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409349] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.746 [2024-04-25 18:15:44.409696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.746 [2024-04-25 18:15:44.409752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.746 [2024-04-25 18:15:44.409767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.409781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.409796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.409809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.409825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.409838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.409853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.409866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.409890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.409904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.409919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.409933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.409947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.409961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.409976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.409998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 
[2024-04-25 18:15:44.410745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.410959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.410974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.410989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.411005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.411020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.411036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.747 [2024-04-25 18:15:44.411049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.411065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.747 [2024-04-25 18:15:44.411079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.747 [2024-04-25 18:15:44.411094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.748 [2024-04-25 18:15:44.411139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.748 [2024-04-25 18:15:44.411207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.748 [2024-04-25 18:15:44.411244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.748 [2024-04-25 18:15:44.411345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.748 [2024-04-25 18:15:44.411373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:44.411572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe31d10 is same with the state(5) to be set 00:20:57.748 [2024-04-25 18:15:44.411615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.748 [2024-04-25 18:15:44.411628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.748 [2024-04-25 18:15:44.411644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31824 len:8 PRP1 0x0 PRP2 0x0 00:20:57.748 [2024-04-25 18:15:44.411658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411728] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe31d10 was disconnected and freed. reset controller. 
00:20:57.748 [2024-04-25 18:15:44.411749] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:20:57.748 [2024-04-25 18:15:44.411816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.748 [2024-04-25 18:15:44.411839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.748 [2024-04-25 18:15:44.411874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.748 [2024-04-25 18:15:44.411903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.748 [2024-04-25 18:15:44.411930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:44.411944] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.748 [2024-04-25 18:15:44.412005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc3010 (9): Bad file descriptor 00:20:57.748 [2024-04-25 18:15:44.414056] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.748 [2024-04-25 18:15:44.432390] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:57.748 [2024-04-25 18:15:48.931947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932538] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.748 [2024-04-25 18:15:48.932725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.748 [2024-04-25 18:15:48.932741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.932758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.932800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.932831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.932860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.932891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.932921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.932951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.932983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.932999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44944 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.933735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 
[2024-04-25 18:15:48.933918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.933978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.933993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.934008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.934024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.749 [2024-04-25 18:15:48.934038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.749 [2024-04-25 18:15:48.934066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.749 [2024-04-25 18:15:48.934082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.934910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.934982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.934998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.935013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.935030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.750 [2024-04-25 18:15:48.935045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.935061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.935076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.935092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.935107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.935124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.935139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.935155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.935170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.935185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.935201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.750 [2024-04-25 18:15:48.935218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.750 [2024-04-25 18:15:48.935233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 
[2024-04-25 18:15:48.935335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.751 [2024-04-25 18:15:48.935381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.751 [2024-04-25 18:15:48.935543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.751 [2024-04-25 18:15:48.935603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.751 [2024-04-25 18:15:48.935633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.751 [2024-04-25 18:15:48.935735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.751 [2024-04-25 18:15:48.935934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.935980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:30 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.935993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44704 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.751 [2024-04-25 18:15:48.936515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.751 [2024-04-25 18:15:48.936531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe31eb0 is same with the state(5) to be set 00:20:57.751 [2024-04-25 18:15:48.936549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.751 [2024-04-25 18:15:48.936561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.751 [2024-04-25 18:15:48.936573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44776 len:8 PRP1 0x0 PRP2 0x0 00:20:57.751 [2024-04-25 18:15:48.936588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.752 [2024-04-25 18:15:48.936674] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe31eb0 was disconnected and freed. reset controller. 
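Note on the block of notices above: the paired print_command / print_completion lines are the expected signature of a path failover. Every I/O still queued on the old path completes with ABORTED - SQ DELETION (status code type 00, status code 08) once its submission queue is torn down, after which the qpair is freed and the controller is reset. A quick, purely illustrative way to tally those aborts from the captured bdevperf output (the try.txt file the test cats further down; adapt the path to wherever the output was actually redirected) is a grep one-liner like the following.

# tally aborted completions and split the aborted commands by opcode (illustrative sketch)
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
grep -c 'ABORTED - SQ DELETION' "$log"
grep -oE '(READ|WRITE) sqid:1' "$log" | sort | uniq -c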
00:20:57.752 [2024-04-25 18:15:48.936696] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:57.752 [2024-04-25 18:15:48.936763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.752 [2024-04-25 18:15:48.936795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.752 [2024-04-25 18:15:48.936814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.752 [2024-04-25 18:15:48.936829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.752 [2024-04-25 18:15:48.936844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.752 [2024-04-25 18:15:48.936857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.752 [2024-04-25 18:15:48.936873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.752 [2024-04-25 18:15:48.936886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.752 [2024-04-25 18:15:48.936901] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.752 [2024-04-25 18:15:48.939016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.752 [2024-04-25 18:15:48.939064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc3010 (9): Bad file descriptor 00:20:57.752 [2024-04-25 18:15:48.972636] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:57.752 00:20:57.752 Latency(us) 00:20:57.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.752 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:57.752 Verification LBA range: start 0x0 length 0x4000 00:20:57.752 NVMe0n1 : 15.01 14583.66 56.97 266.36 0.00 8603.70 543.65 18230.92 00:20:57.752 =================================================================================================================== 00:20:57.752 Total : 14583.66 56.97 266.36 0.00 8603.70 543.65 18230.92 00:20:57.752 Received shutdown signal, test time was about 15.000000 seconds 00:20:57.752 00:20:57.752 Latency(us) 00:20:57.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.752 =================================================================================================================== 00:20:57.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.752 18:15:54 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:57.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:57.752 18:15:54 -- host/failover.sh@65 -- # count=3 00:20:57.752 18:15:54 -- host/failover.sh@67 -- # (( count != 3 )) 00:20:57.752 18:15:54 -- host/failover.sh@73 -- # bdevperf_pid=83080 00:20:57.752 18:15:54 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:57.752 18:15:54 -- host/failover.sh@75 -- # waitforlisten 83080 /var/tmp/bdevperf.sock 00:20:57.752 18:15:54 -- common/autotest_common.sh@819 -- # '[' -z 83080 ']' 00:20:57.752 18:15:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.752 18:15:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:57.752 18:15:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.752 18:15:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:57.752 18:15:54 -- common/autotest_common.sh@10 -- # set +x 00:20:58.317 18:15:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:58.317 18:15:55 -- common/autotest_common.sh@852 -- # return 0 00:20:58.317 18:15:55 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:58.317 [2024-04-25 18:15:56.205690] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:58.317 18:15:56 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:58.573 [2024-04-25 18:15:56.413872] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:58.573 18:15:56 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:58.831 NVMe0n1 00:20:58.831 18:15:56 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.089 00:20:59.349 18:15:57 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.349 00:20:59.608 18:15:57 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.608 18:15:57 -- host/failover.sh@82 -- # grep -q NVMe0 00:20:59.866 18:15:57 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.866 18:15:57 -- host/failover.sh@87 -- # sleep 3 00:21:03.152 18:16:00 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:03.152 18:16:00 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:03.152 18:16:01 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.152 18:16:01 -- host/failover.sh@90 -- # run_test_pid=83217 00:21:03.152 18:16:01 -- host/failover.sh@92 -- # wait 83217 00:21:04.585 0 00:21:04.585 18:16:02 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:04.585 [2024-04-25 18:15:55.043618] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:04.585 [2024-04-25 18:15:55.043714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83080 ] 00:21:04.585 [2024-04-25 18:15:55.178159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.585 [2024-04-25 18:15:55.264982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.585 [2024-04-25 18:15:57.754830] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:04.585 [2024-04-25 18:15:57.754940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.585 [2024-04-25 18:15:57.754964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.585 [2024-04-25 18:15:57.754981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.585 [2024-04-25 18:15:57.754993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.585 [2024-04-25 18:15:57.755006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.585 [2024-04-25 18:15:57.755018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.585 [2024-04-25 18:15:57.755031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.585 [2024-04-25 18:15:57.755043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.585 [2024-04-25 18:15:57.755056] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:04.585 [2024-04-25 18:15:57.755100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:04.585 [2024-04-25 18:15:57.755128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f54010 (9): Bad file descriptor 00:21:04.585 [2024-04-25 18:15:57.761910] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:04.585 Running I/O for 1 seconds... 
00:21:04.585 00:21:04.585 Latency(us) 00:21:04.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.585 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:04.585 Verification LBA range: start 0x0 length 0x4000 00:21:04.585 NVMe0n1 : 1.01 14791.50 57.78 0.00 0.00 8615.01 1370.30 11558.17 00:21:04.585 =================================================================================================================== 00:21:04.585 Total : 14791.50 57.78 0.00 0.00 8615.01 1370.30 11558.17 00:21:04.585 18:16:02 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.585 18:16:02 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:04.585 18:16:02 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:04.842 18:16:02 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.842 18:16:02 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:05.101 18:16:02 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.359 18:16:03 -- host/failover.sh@101 -- # sleep 3 00:21:08.644 18:16:06 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:08.644 18:16:06 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:08.644 18:16:06 -- host/failover.sh@108 -- # killprocess 83080 00:21:08.644 18:16:06 -- common/autotest_common.sh@926 -- # '[' -z 83080 ']' 00:21:08.644 18:16:06 -- common/autotest_common.sh@930 -- # kill -0 83080 00:21:08.644 18:16:06 -- common/autotest_common.sh@931 -- # uname 00:21:08.644 18:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:08.644 18:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83080 00:21:08.644 killing process with pid 83080 00:21:08.644 18:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:08.644 18:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:08.644 18:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83080' 00:21:08.644 18:16:06 -- common/autotest_common.sh@945 -- # kill 83080 00:21:08.644 18:16:06 -- common/autotest_common.sh@950 -- # wait 83080 00:21:08.644 18:16:06 -- host/failover.sh@110 -- # sync 00:21:08.902 18:16:06 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.161 18:16:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:09.161 18:16:06 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:09.161 18:16:06 -- host/failover.sh@116 -- # nvmftestfini 00:21:09.161 18:16:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:09.161 18:16:06 -- nvmf/common.sh@116 -- # sync 00:21:09.161 18:16:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:09.161 18:16:06 -- nvmf/common.sh@119 -- # set +e 00:21:09.161 18:16:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:09.161 18:16:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:09.161 rmmod nvme_tcp 00:21:09.161 rmmod nvme_fabrics 00:21:09.161 rmmod nvme_keyring 00:21:09.161 18:16:06 -- nvmf/common.sh@122 
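For reference, the path-flipping sequence the trace above walks through reduces to a handful of RPCs. This is a minimal sketch, not the test script itself: the rpc.py path, bdevperf socket, subsystem NQN, address and the three listener ports are taken verbatim from the trace, while the loop and the shell variables are illustrative shorthand.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# expose two extra listeners on the target side (4420 is already listening)
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

# register all three paths under the same bdev name so bdev_nvme can fail over between them
for port in 4420 4421 4422; do
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
done

# drop the active path, give the failover a moment, then confirm the controller is still present
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
sleep 3
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0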
-- # modprobe -v -r nvme-fabrics 00:21:09.161 18:16:06 -- nvmf/common.sh@123 -- # set -e 00:21:09.161 18:16:06 -- nvmf/common.sh@124 -- # return 0 00:21:09.161 18:16:06 -- nvmf/common.sh@477 -- # '[' -n 82722 ']' 00:21:09.161 18:16:06 -- nvmf/common.sh@478 -- # killprocess 82722 00:21:09.161 18:16:06 -- common/autotest_common.sh@926 -- # '[' -z 82722 ']' 00:21:09.161 18:16:06 -- common/autotest_common.sh@930 -- # kill -0 82722 00:21:09.161 18:16:06 -- common/autotest_common.sh@931 -- # uname 00:21:09.161 18:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:09.161 18:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82722 00:21:09.161 killing process with pid 82722 00:21:09.161 18:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:09.161 18:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:09.161 18:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82722' 00:21:09.161 18:16:06 -- common/autotest_common.sh@945 -- # kill 82722 00:21:09.161 18:16:06 -- common/autotest_common.sh@950 -- # wait 82722 00:21:09.420 18:16:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:09.420 18:16:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:09.420 18:16:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:09.420 18:16:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.420 18:16:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:09.420 18:16:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.420 18:16:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.420 18:16:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.420 18:16:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:09.420 00:21:09.420 real 0m32.311s 00:21:09.420 user 2m3.947s 00:21:09.420 sys 0m5.497s 00:21:09.420 18:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.420 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.420 ************************************ 00:21:09.420 END TEST nvmf_failover 00:21:09.420 ************************************ 00:21:09.420 18:16:07 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:09.420 18:16:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:09.420 18:16:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:09.420 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.420 ************************************ 00:21:09.420 START TEST nvmf_discovery 00:21:09.420 ************************************ 00:21:09.420 18:16:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:09.679 * Looking for test storage... 
00:21:09.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:09.679 18:16:07 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.679 18:16:07 -- nvmf/common.sh@7 -- # uname -s 00:21:09.679 18:16:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.679 18:16:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.679 18:16:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.679 18:16:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.679 18:16:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.679 18:16:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.679 18:16:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.679 18:16:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.679 18:16:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.679 18:16:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.679 18:16:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:21:09.679 18:16:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:21:09.679 18:16:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.679 18:16:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.679 18:16:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.679 18:16:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.679 18:16:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.679 18:16:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.679 18:16:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.679 18:16:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.679 18:16:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.679 18:16:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.679 18:16:07 -- paths/export.sh@5 
-- # export PATH 00:21:09.679 18:16:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.679 18:16:07 -- nvmf/common.sh@46 -- # : 0 00:21:09.679 18:16:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:09.679 18:16:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:09.679 18:16:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:09.679 18:16:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.679 18:16:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.679 18:16:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:09.679 18:16:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:09.679 18:16:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:09.679 18:16:07 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:09.679 18:16:07 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:09.679 18:16:07 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:09.679 18:16:07 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:09.679 18:16:07 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:09.679 18:16:07 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:09.679 18:16:07 -- host/discovery.sh@25 -- # nvmftestinit 00:21:09.679 18:16:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:09.679 18:16:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.679 18:16:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:09.679 18:16:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:09.679 18:16:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:09.679 18:16:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.679 18:16:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.679 18:16:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.679 18:16:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:09.679 18:16:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:09.679 18:16:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:09.679 18:16:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:09.679 18:16:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:09.679 18:16:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:09.679 18:16:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.679 18:16:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.679 18:16:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:09.679 18:16:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:09.679 18:16:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:09.679 18:16:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:09.679 18:16:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:09.679 18:16:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.679 18:16:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:09.679 
18:16:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:09.679 18:16:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:09.679 18:16:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:09.679 18:16:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:09.679 18:16:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:09.679 Cannot find device "nvmf_tgt_br" 00:21:09.679 18:16:07 -- nvmf/common.sh@154 -- # true 00:21:09.679 18:16:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.679 Cannot find device "nvmf_tgt_br2" 00:21:09.679 18:16:07 -- nvmf/common.sh@155 -- # true 00:21:09.679 18:16:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:09.679 18:16:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:09.679 Cannot find device "nvmf_tgt_br" 00:21:09.679 18:16:07 -- nvmf/common.sh@157 -- # true 00:21:09.679 18:16:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:09.679 Cannot find device "nvmf_tgt_br2" 00:21:09.679 18:16:07 -- nvmf/common.sh@158 -- # true 00:21:09.679 18:16:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:09.679 18:16:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:09.679 18:16:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.679 18:16:07 -- nvmf/common.sh@161 -- # true 00:21:09.679 18:16:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.679 18:16:07 -- nvmf/common.sh@162 -- # true 00:21:09.679 18:16:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:09.679 18:16:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:09.680 18:16:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:09.680 18:16:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:09.680 18:16:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:09.680 18:16:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:09.680 18:16:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:09.680 18:16:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:09.680 18:16:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:09.680 18:16:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:09.680 18:16:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:09.680 18:16:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:09.680 18:16:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:09.938 18:16:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:09.938 18:16:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:09.938 18:16:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:09.938 18:16:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:09.939 18:16:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:09.939 18:16:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:21:09.939 18:16:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:09.939 18:16:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:09.939 18:16:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:09.939 18:16:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.939 18:16:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:09.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:09.939 00:21:09.939 --- 10.0.0.2 ping statistics --- 00:21:09.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.939 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:09.939 18:16:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:09.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:09.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:21:09.939 00:21:09.939 --- 10.0.0.3 ping statistics --- 00:21:09.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.939 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:09.939 18:16:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:09.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:09.939 00:21:09.939 --- 10.0.0.1 ping statistics --- 00:21:09.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.939 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:09.939 18:16:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.939 18:16:07 -- nvmf/common.sh@421 -- # return 0 00:21:09.939 18:16:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:09.939 18:16:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.939 18:16:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:09.939 18:16:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:09.939 18:16:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.939 18:16:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:09.939 18:16:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:09.939 18:16:07 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:09.939 18:16:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:09.939 18:16:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:09.939 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.939 18:16:07 -- nvmf/common.sh@469 -- # nvmfpid=83517 00:21:09.939 18:16:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:09.939 18:16:07 -- nvmf/common.sh@470 -- # waitforlisten 83517 00:21:09.939 18:16:07 -- common/autotest_common.sh@819 -- # '[' -z 83517 ']' 00:21:09.939 18:16:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.939 18:16:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:09.939 18:16:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
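Most of the trace above is nvmf_veth_init at work: it removes any stale devices, then builds a network namespace for the target, two veth pairs, a bridge, and the 10.0.0.0/24 addressing used for the rest of the run, before nvmf_tgt is started inside that namespace. A minimal sketch of that topology, with device names, addresses and the iptables rules copied from the trace (teardown, the second target interface nvmf_tgt_if2, and error handling are omitted):

    # Namespace that will host the NVMe-oF target; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry IP traffic, the *_br ends are enslaved to a bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing used throughout the test: initiator 10.0.0.1, target 10.0.0.2.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the root-namespace veth ends together so initiator and target can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open the NVMe/TCP I/O port and verify reachability, as the pings above do.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1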
00:21:09.939 18:16:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:09.939 18:16:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.939 [2024-04-25 18:16:07.778906] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:09.939 [2024-04-25 18:16:07.778997] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.198 [2024-04-25 18:16:07.916368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.198 [2024-04-25 18:16:07.989685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:10.198 [2024-04-25 18:16:07.989820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.198 [2024-04-25 18:16:07.989832] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.198 [2024-04-25 18:16:07.989840] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.198 [2024-04-25 18:16:07.989870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.766 18:16:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.766 18:16:08 -- common/autotest_common.sh@852 -- # return 0 00:21:10.766 18:16:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:10.766 18:16:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:10.766 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.025 18:16:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.025 18:16:08 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.025 18:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.025 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.025 [2024-04-25 18:16:08.735243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.025 18:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.025 18:16:08 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:11.025 18:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.025 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.025 [2024-04-25 18:16:08.743400] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:11.025 18:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.025 18:16:08 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:11.025 18:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.025 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.025 null0 00:21:11.025 18:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.025 18:16:08 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:11.025 18:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.025 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.025 null1 00:21:11.025 18:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.025 18:16:08 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:11.025 18:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.025 18:16:08 -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.025 18:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.025 18:16:08 -- host/discovery.sh@45 -- # hostpid=83566 00:21:11.025 18:16:08 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:11.025 18:16:08 -- host/discovery.sh@46 -- # waitforlisten 83566 /tmp/host.sock 00:21:11.025 18:16:08 -- common/autotest_common.sh@819 -- # '[' -z 83566 ']' 00:21:11.025 18:16:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:11.025 18:16:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:11.025 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:11.025 18:16:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:11.025 18:16:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:11.025 18:16:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.025 [2024-04-25 18:16:08.831017] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:11.025 [2024-04-25 18:16:08.831121] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83566 ] 00:21:11.283 [2024-04-25 18:16:08.972447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.283 [2024-04-25 18:16:09.075840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:11.283 [2024-04-25 18:16:09.076040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.227 18:16:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.227 18:16:09 -- common/autotest_common.sh@852 -- # return 0 00:21:12.227 18:16:09 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.227 18:16:09 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:12.227 18:16:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.227 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.227 18:16:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.227 18:16:09 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:12.228 18:16:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:09 -- host/discovery.sh@72 -- # notify_id=0 00:21:12.228 18:16:09 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.228 18:16:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # sort 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # xargs 00:21:12.228 18:16:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:09 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:12.228 18:16:09 -- host/discovery.sh@79 -- # get_bdev_list 00:21:12.228 
18:16:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.228 18:16:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.228 18:16:09 -- host/discovery.sh@55 -- # xargs 00:21:12.228 18:16:09 -- host/discovery.sh@55 -- # sort 00:21:12.228 18:16:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:09 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:12.228 18:16:09 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:12.228 18:16:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:09 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.228 18:16:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # xargs 00:21:12.228 18:16:09 -- host/discovery.sh@59 -- # sort 00:21:12.228 18:16:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:09 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:12.228 18:16:10 -- host/discovery.sh@83 -- # get_bdev_list 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # sort 00:21:12.228 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # xargs 00:21:12.228 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:10 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:12.228 18:16:10 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:12.228 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:10 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:12.228 18:16:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.228 18:16:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.228 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:10 -- host/discovery.sh@59 -- # sort 00:21:12.228 18:16:10 -- host/discovery.sh@59 -- # xargs 00:21:12.228 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.228 18:16:10 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:12.228 18:16:10 -- host/discovery.sh@87 -- # get_bdev_list 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.228 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.228 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # sort 00:21:12.228 18:16:10 -- host/discovery.sh@55 -- # 
xargs 00:21:12.228 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 18:16:10 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:12.486 18:16:10 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.486 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 [2024-04-25 18:16:10.187804] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.486 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 18:16:10 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:12.486 18:16:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.486 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 18:16:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.486 18:16:10 -- host/discovery.sh@59 -- # xargs 00:21:12.486 18:16:10 -- host/discovery.sh@59 -- # sort 00:21:12.486 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 18:16:10 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:12.486 18:16:10 -- host/discovery.sh@93 -- # get_bdev_list 00:21:12.486 18:16:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.486 18:16:10 -- host/discovery.sh@55 -- # sort 00:21:12.486 18:16:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.486 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 18:16:10 -- host/discovery.sh@55 -- # xargs 00:21:12.486 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 18:16:10 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:12.486 18:16:10 -- host/discovery.sh@94 -- # get_notification_count 00:21:12.486 18:16:10 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:12.486 18:16:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:12.486 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 18:16:10 -- host/discovery.sh@74 -- # notification_count=0 00:21:12.486 18:16:10 -- host/discovery.sh@75 -- # notify_id=0 00:21:12.486 18:16:10 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:12.486 18:16:10 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:12.486 18:16:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 18:16:10 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 18:16:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 18:16:10 -- host/discovery.sh@100 -- # sleep 1 00:21:13.053 [2024-04-25 18:16:10.828253] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:13.053 [2024-04-25 18:16:10.828301] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:13.053 [2024-04-25 18:16:10.828320] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:13.053 [2024-04-25 18:16:10.915378] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:13.053 [2024-04-25 18:16:10.970974] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:13.053 [2024-04-25 18:16:10.971017] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:13.622 18:16:11 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:13.622 18:16:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.622 18:16:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:13.622 18:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.622 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 18:16:11 -- host/discovery.sh@59 -- # sort 00:21:13.622 18:16:11 -- host/discovery.sh@59 -- # xargs 00:21:13.622 18:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.622 18:16:11 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.622 18:16:11 -- host/discovery.sh@102 -- # get_bdev_list 00:21:13.622 18:16:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.622 18:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.622 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 18:16:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.622 18:16:11 -- host/discovery.sh@55 -- # sort 00:21:13.622 18:16:11 -- host/discovery.sh@55 -- # xargs 00:21:13.622 18:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.622 18:16:11 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:13.622 18:16:11 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:13.622 18:16:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:13.622 18:16:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:13.622 18:16:11 -- host/discovery.sh@63 -- # sort -n 00:21:13.622 18:16:11 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:21:13.622 18:16:11 -- host/discovery.sh@63 -- # xargs 00:21:13.622 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 18:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.622 18:16:11 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:13.622 18:16:11 -- host/discovery.sh@104 -- # get_notification_count 00:21:13.622 18:16:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:13.622 18:16:11 -- host/discovery.sh@74 -- # jq '. | length' 00:21:13.622 18:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.622 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 18:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.881 18:16:11 -- host/discovery.sh@74 -- # notification_count=1 00:21:13.881 18:16:11 -- host/discovery.sh@75 -- # notify_id=1 00:21:13.881 18:16:11 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:13.881 18:16:11 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:13.881 18:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.881 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.881 18:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.881 18:16:11 -- host/discovery.sh@109 -- # sleep 1 00:21:14.815 18:16:12 -- host/discovery.sh@110 -- # get_bdev_list 00:21:14.815 18:16:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.815 18:16:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:14.815 18:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.815 18:16:12 -- common/autotest_common.sh@10 -- # set +x 00:21:14.815 18:16:12 -- host/discovery.sh@55 -- # sort 00:21:14.815 18:16:12 -- host/discovery.sh@55 -- # xargs 00:21:14.815 18:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.815 18:16:12 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:14.815 18:16:12 -- host/discovery.sh@111 -- # get_notification_count 00:21:14.815 18:16:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:14.815 18:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.815 18:16:12 -- common/autotest_common.sh@10 -- # set +x 00:21:14.815 18:16:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:14.815 18:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.815 18:16:12 -- host/discovery.sh@74 -- # notification_count=1 00:21:14.815 18:16:12 -- host/discovery.sh@75 -- # notify_id=2 00:21:14.815 18:16:12 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:14.815 18:16:12 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:14.815 18:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.815 18:16:12 -- common/autotest_common.sh@10 -- # set +x 00:21:14.815 [2024-04-25 18:16:12.720778] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:14.815 [2024-04-25 18:16:12.721533] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:14.815 [2024-04-25 18:16:12.721603] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:14.815 18:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.815 18:16:12 -- host/discovery.sh@117 -- # sleep 1 00:21:15.074 [2024-04-25 18:16:12.807625] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:15.074 [2024-04-25 18:16:12.870854] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:15.074 [2024-04-25 18:16:12.870877] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:15.074 [2024-04-25 18:16:12.870899] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:16.009 18:16:13 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:16.009 18:16:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.009 18:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.009 18:16:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.009 18:16:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.009 18:16:13 -- host/discovery.sh@59 -- # sort 00:21:16.009 18:16:13 -- host/discovery.sh@59 -- # xargs 00:21:16.009 18:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@119 -- # get_bdev_list 00:21:16.009 18:16:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.009 18:16:13 -- host/discovery.sh@55 -- # sort 00:21:16.009 18:16:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.009 18:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.009 18:16:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.009 18:16:13 -- host/discovery.sh@55 -- # xargs 00:21:16.009 18:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:16.009 18:16:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:16.009 18:16:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:16.009 18:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.009 18:16:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.009 18:16:13 -- host/discovery.sh@63 
-- # sort -n 00:21:16.009 18:16:13 -- host/discovery.sh@63 -- # xargs 00:21:16.009 18:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@121 -- # get_notification_count 00:21:16.009 18:16:13 -- host/discovery.sh@74 -- # jq '. | length' 00:21:16.009 18:16:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:16.009 18:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.009 18:16:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.009 18:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@74 -- # notification_count=0 00:21:16.009 18:16:13 -- host/discovery.sh@75 -- # notify_id=2 00:21:16.009 18:16:13 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:16.009 18:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.009 18:16:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.009 [2024-04-25 18:16:13.925909] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:16.009 [2024-04-25 18:16:13.925967] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:16.009 [2024-04-25 18:16:13.928940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.009 [2024-04-25 18:16:13.928988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.009 [2024-04-25 18:16:13.929016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.009 [2024-04-25 18:16:13.929025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.009 [2024-04-25 18:16:13.929034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.009 [2024-04-25 18:16:13.929042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.009 [2024-04-25 18:16:13.929051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.009 [2024-04-25 18:16:13.929059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.009 [2024-04-25 18:16:13.929067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.009 18:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.009 18:16:13 -- host/discovery.sh@127 -- # sleep 1 00:21:16.009 [2024-04-25 18:16:13.938906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.268 [2024-04-25 18:16:13.948924] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:16.268 [2024-04-25 18:16:13.949050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
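Interleaved with the host-side checks, the target is provisioned and then partially torn down through rpc_cmd; the connect() failures with errno 111 around this point are the expected result of the initiator retrying the 4420 path that has just been removed. A consolidated sketch of those target-side calls, assuming SPDK's scripts/rpc.py from the repo checked out above and the target's default /var/tmp/spdk.sock (the log drives the same methods through the rpc_cmd wrapper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport plus a discovery listener on 10.0.0.2:8009.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

    # Data subsystem backed by two null bdevs, exposed on ports 4420 and 4421.
    $rpc bdev_null_create null0 1000 512
    $rpc bdev_null_create null1 1000 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

    # Removing the 4420 listener triggers the reconnect errors and, once the next
    # discovery log page is read, the "4420 not found / 4421 found again" messages.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420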
00:21:16.268 [2024-04-25 18:16:13.949096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.949112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7bd0 with addr=10.0.0.2, port=4420 00:21:16.269 [2024-04-25 18:16:13.949122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.269 [2024-04-25 18:16:13.949137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.269 [2024-04-25 18:16:13.949150] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.269 [2024-04-25 18:16:13.949158] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:16.269 [2024-04-25 18:16:13.949167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.269 [2024-04-25 18:16:13.949181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:16.269 [2024-04-25 18:16:13.959005] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:16.269 [2024-04-25 18:16:13.959112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.959156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.959172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7bd0 with addr=10.0.0.2, port=4420 00:21:16.269 [2024-04-25 18:16:13.959182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.269 [2024-04-25 18:16:13.959196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.269 [2024-04-25 18:16:13.959208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.269 [2024-04-25 18:16:13.959216] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:16.269 [2024-04-25 18:16:13.959224] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.269 [2024-04-25 18:16:13.959237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:16.269 [2024-04-25 18:16:13.969083] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:16.269 [2024-04-25 18:16:13.969218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.969281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.969298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7bd0 with addr=10.0.0.2, port=4420 00:21:16.269 [2024-04-25 18:16:13.969323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.269 [2024-04-25 18:16:13.969339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.269 [2024-04-25 18:16:13.969352] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.269 [2024-04-25 18:16:13.969361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:16.269 [2024-04-25 18:16:13.969369] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.269 [2024-04-25 18:16:13.969383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:16.269 [2024-04-25 18:16:13.979164] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:16.269 [2024-04-25 18:16:13.979275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.979333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.979350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7bd0 with addr=10.0.0.2, port=4420 00:21:16.269 [2024-04-25 18:16:13.979360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.269 [2024-04-25 18:16:13.979374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.269 [2024-04-25 18:16:13.979386] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.269 [2024-04-25 18:16:13.979394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:16.269 [2024-04-25 18:16:13.979402] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.269 [2024-04-25 18:16:13.979415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:16.269 [2024-04-25 18:16:13.989247] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:16.269 [2024-04-25 18:16:13.989349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.989394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.989411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7bd0 with addr=10.0.0.2, port=4420 00:21:16.269 [2024-04-25 18:16:13.989421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.269 [2024-04-25 18:16:13.989436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.269 [2024-04-25 18:16:13.989449] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.269 [2024-04-25 18:16:13.989457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:16.269 [2024-04-25 18:16:13.989466] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.269 [2024-04-25 18:16:13.989479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:16.269 [2024-04-25 18:16:13.999318] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:16.269 [2024-04-25 18:16:13.999438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.999483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:13.999499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7bd0 with addr=10.0.0.2, port=4420 00:21:16.269 [2024-04-25 18:16:13.999509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.269 [2024-04-25 18:16:13.999523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.269 [2024-04-25 18:16:13.999535] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.269 [2024-04-25 18:16:13.999543] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:16.269 [2024-04-25 18:16:13.999551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.269 [2024-04-25 18:16:13.999564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
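While the initiator retries the removed 4420 path above, the script checks its view of the state through a second SPDK app acting as the host, reachable on /tmp/host.sock. A hedged sketch of the discovery start and of the get_subsystem_names / get_bdev_list / get_subsystem_paths / get_notification_count helpers, with arguments copied from the trace ($rpc is shorthand for scripts/rpc.py pointed at the host socket):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

    # Attach to the discovery service on 10.0.0.2:8009 as nqn.2021-12.io.spdk:test.
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # get_subsystem_names: attached controllers (expects "nvme0" here).
    $rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs

    # get_bdev_list: namespaces surfaced as bdevs (expects "nvme0n1 nvme0n2").
    $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs

    # get_subsystem_paths: ports still reachable for nvme0 (only 4421 once 4420 is gone).
    $rpc bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

    # get_notification_count: bdev notifications issued since the last seen notify_id.
    $rpc notify_get_notifications -i 2 | jq '. | length'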
00:21:16.269 [2024-04-25 18:16:14.009396] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:16.269 [2024-04-25 18:16:14.009488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:14.009531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.269 [2024-04-25 18:16:14.009546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac7bd0 with addr=10.0.0.2, port=4420 00:21:16.269 [2024-04-25 18:16:14.009556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7bd0 is same with the state(5) to be set 00:21:16.269 [2024-04-25 18:16:14.009570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7bd0 (9): Bad file descriptor 00:21:16.269 [2024-04-25 18:16:14.009582] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.269 [2024-04-25 18:16:14.009590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:16.269 [2024-04-25 18:16:14.009598] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.269 [2024-04-25 18:16:14.009611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:16.269 [2024-04-25 18:16:14.011964] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:16.269 [2024-04-25 18:16:14.012007] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:17.212 18:16:14 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:17.212 18:16:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:17.212 18:16:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:17.212 18:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:17.212 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:21:17.212 18:16:14 -- host/discovery.sh@59 -- # sort 00:21:17.212 18:16:14 -- host/discovery.sh@59 -- # xargs 00:21:17.212 18:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:17.212 18:16:14 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.212 18:16:14 -- host/discovery.sh@129 -- # get_bdev_list 00:21:17.212 18:16:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:17.212 18:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:17.212 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:21:17.212 18:16:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:17.212 18:16:14 -- host/discovery.sh@55 -- # sort 00:21:17.212 18:16:14 -- host/discovery.sh@55 -- # xargs 00:21:17.212 18:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:17.212 18:16:15 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:17.212 18:16:15 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:17.212 18:16:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:17.212 18:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:17.212 18:16:15 -- common/autotest_common.sh@10 -- # set +x 00:21:17.212 18:16:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:17.212 18:16:15 -- 
host/discovery.sh@63 -- # sort -n 00:21:17.212 18:16:15 -- host/discovery.sh@63 -- # xargs 00:21:17.212 18:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:17.212 18:16:15 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:17.212 18:16:15 -- host/discovery.sh@131 -- # get_notification_count 00:21:17.212 18:16:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:17.212 18:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:17.212 18:16:15 -- common/autotest_common.sh@10 -- # set +x 00:21:17.212 18:16:15 -- host/discovery.sh@74 -- # jq '. | length' 00:21:17.212 18:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:17.483 18:16:15 -- host/discovery.sh@74 -- # notification_count=0 00:21:17.483 18:16:15 -- host/discovery.sh@75 -- # notify_id=2 00:21:17.483 18:16:15 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:17.483 18:16:15 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:17.483 18:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:17.483 18:16:15 -- common/autotest_common.sh@10 -- # set +x 00:21:17.483 18:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:17.483 18:16:15 -- host/discovery.sh@135 -- # sleep 1 00:21:18.416 18:16:16 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:18.416 18:16:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.416 18:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.416 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:21:18.416 18:16:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:18.416 18:16:16 -- host/discovery.sh@59 -- # sort 00:21:18.416 18:16:16 -- host/discovery.sh@59 -- # xargs 00:21:18.416 18:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.416 18:16:16 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:18.416 18:16:16 -- host/discovery.sh@137 -- # get_bdev_list 00:21:18.416 18:16:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.416 18:16:16 -- host/discovery.sh@55 -- # sort 00:21:18.416 18:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.416 18:16:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.416 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:21:18.416 18:16:16 -- host/discovery.sh@55 -- # xargs 00:21:18.416 18:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.416 18:16:16 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:18.416 18:16:16 -- host/discovery.sh@138 -- # get_notification_count 00:21:18.416 18:16:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:18.416 18:16:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:18.416 18:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.416 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:21:18.416 18:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.416 18:16:16 -- host/discovery.sh@74 -- # notification_count=2 00:21:18.416 18:16:16 -- host/discovery.sh@75 -- # notify_id=4 00:21:18.416 18:16:16 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:18.416 18:16:16 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.416 18:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.416 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:21:19.792 [2024-04-25 18:16:17.339260] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:19.792 [2024-04-25 18:16:17.339308] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:19.792 [2024-04-25 18:16:17.339343] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:19.792 [2024-04-25 18:16:17.425380] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:19.792 [2024-04-25 18:16:17.484467] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:19.792 [2024-04-25 18:16:17.484521] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:19.792 18:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.792 18:16:17 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.792 18:16:17 -- common/autotest_common.sh@640 -- # local es=0 00:21:19.792 18:16:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.792 18:16:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:19.792 18:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:19.792 18:16:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:19.792 18:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:19.792 18:16:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.792 18:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.792 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:19.792 2024/04/25 18:16:17 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:19.792 request: 00:21:19.792 { 00:21:19.792 "method": "bdev_nvme_start_discovery", 00:21:19.792 "params": { 00:21:19.792 "name": "nvme", 00:21:19.792 "trtype": "tcp", 00:21:19.792 "traddr": "10.0.0.2", 00:21:19.792 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:19.792 "adrfam": "ipv4", 00:21:19.792 "trsvcid": "8009", 00:21:19.792 "wait_for_attach": true 00:21:19.792 } 
00:21:19.792 } 00:21:19.792 Got JSON-RPC error response 00:21:19.792 GoRPCClient: error on JSON-RPC call 00:21:19.792 18:16:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:19.792 18:16:17 -- common/autotest_common.sh@643 -- # es=1 00:21:19.792 18:16:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:19.792 18:16:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:19.792 18:16:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:19.792 18:16:17 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # sort 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:19.792 18:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.792 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # xargs 00:21:19.792 18:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.792 18:16:17 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:19.792 18:16:17 -- host/discovery.sh@147 -- # get_bdev_list 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:19.792 18:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # sort 00:21:19.792 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # xargs 00:21:19.792 18:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.792 18:16:17 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:19.792 18:16:17 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.792 18:16:17 -- common/autotest_common.sh@640 -- # local es=0 00:21:19.792 18:16:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.792 18:16:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:19.792 18:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:19.792 18:16:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:19.792 18:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:19.792 18:16:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.792 18:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.792 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:19.792 2024/04/25 18:16:17 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:19.792 request: 00:21:19.792 { 00:21:19.792 "method": "bdev_nvme_start_discovery", 00:21:19.792 "params": { 00:21:19.792 "name": "nvme_second", 00:21:19.792 "trtype": "tcp", 00:21:19.792 "traddr": "10.0.0.2", 00:21:19.792 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:19.792 "adrfam": "ipv4", 00:21:19.792 
"trsvcid": "8009", 00:21:19.792 "wait_for_attach": true 00:21:19.792 } 00:21:19.792 } 00:21:19.792 Got JSON-RPC error response 00:21:19.792 GoRPCClient: error on JSON-RPC call 00:21:19.792 18:16:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:19.792 18:16:17 -- common/autotest_common.sh@643 -- # es=1 00:21:19.792 18:16:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:19.792 18:16:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:19.792 18:16:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:19.792 18:16:17 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:19.792 18:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.792 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # sort 00:21:19.792 18:16:17 -- host/discovery.sh@67 -- # xargs 00:21:19.792 18:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.792 18:16:17 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:19.792 18:16:17 -- host/discovery.sh@153 -- # get_bdev_list 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # sort 00:21:19.792 18:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.792 18:16:17 -- host/discovery.sh@55 -- # xargs 00:21:19.792 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:19.792 18:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.051 18:16:17 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:20.051 18:16:17 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.051 18:16:17 -- common/autotest_common.sh@640 -- # local es=0 00:21:20.051 18:16:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.051 18:16:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:20.051 18:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:20.051 18:16:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:20.051 18:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:20.051 18:16:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.051 18:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.051 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:20.986 [2024-04-25 18:16:18.750582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.986 [2024-04-25 18:16:18.750704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.986 [2024-04-25 18:16:18.750721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1f410 with addr=10.0.0.2, port=8010 00:21:20.986 [2024-04-25 18:16:18.750739] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:20.986 [2024-04-25 18:16:18.750747] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:20.986 [2024-04-25 18:16:18.750756] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:21.925 [2024-04-25 18:16:19.750588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.925 [2024-04-25 18:16:19.750677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.925 [2024-04-25 18:16:19.750695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1f410 with addr=10.0.0.2, port=8010 00:21:21.925 [2024-04-25 18:16:19.750712] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:21.925 [2024-04-25 18:16:19.750720] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:21.925 [2024-04-25 18:16:19.750729] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:22.861 [2024-04-25 18:16:20.750478] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:22.861 2024/04/25 18:16:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:22.861 request: 00:21:22.861 { 00:21:22.861 "method": "bdev_nvme_start_discovery", 00:21:22.861 "params": { 00:21:22.861 "name": "nvme_second", 00:21:22.861 "trtype": "tcp", 00:21:22.861 "traddr": "10.0.0.2", 00:21:22.861 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:22.861 "adrfam": "ipv4", 00:21:22.861 "trsvcid": "8010", 00:21:22.861 "attach_timeout_ms": 3000 00:21:22.861 } 00:21:22.861 } 00:21:22.861 Got JSON-RPC error response 00:21:22.861 GoRPCClient: error on JSON-RPC call 00:21:22.861 18:16:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:22.861 18:16:20 -- common/autotest_common.sh@643 -- # es=1 00:21:22.861 18:16:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:22.861 18:16:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:22.861 18:16:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:22.861 18:16:20 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:22.861 18:16:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:22.861 18:16:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:22.861 18:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.861 18:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:22.861 18:16:20 -- host/discovery.sh@67 -- # sort 00:21:22.861 18:16:20 -- host/discovery.sh@67 -- # xargs 00:21:22.861 18:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.120 18:16:20 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:23.120 18:16:20 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:23.120 18:16:20 -- host/discovery.sh@162 -- # kill 83566 00:21:23.120 18:16:20 -- host/discovery.sh@163 -- # nvmftestfini 00:21:23.120 18:16:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:23.120 18:16:20 -- nvmf/common.sh@116 -- # sync 00:21:23.120 18:16:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:23.120 18:16:20 -- nvmf/common.sh@119 -- # set +e 00:21:23.120 18:16:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:23.120 18:16:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:21:23.120 rmmod nvme_tcp 00:21:23.120 rmmod nvme_fabrics 00:21:23.120 rmmod nvme_keyring 00:21:23.120 18:16:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:23.120 18:16:20 -- nvmf/common.sh@123 -- # set -e 00:21:23.120 18:16:20 -- nvmf/common.sh@124 -- # return 0 00:21:23.120 18:16:20 -- nvmf/common.sh@477 -- # '[' -n 83517 ']' 00:21:23.120 18:16:20 -- nvmf/common.sh@478 -- # killprocess 83517 00:21:23.120 18:16:20 -- common/autotest_common.sh@926 -- # '[' -z 83517 ']' 00:21:23.120 18:16:20 -- common/autotest_common.sh@930 -- # kill -0 83517 00:21:23.120 18:16:20 -- common/autotest_common.sh@931 -- # uname 00:21:23.120 18:16:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:23.120 18:16:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83517 00:21:23.120 killing process with pid 83517 00:21:23.120 18:16:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:23.120 18:16:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:23.120 18:16:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83517' 00:21:23.120 18:16:20 -- common/autotest_common.sh@945 -- # kill 83517 00:21:23.120 18:16:20 -- common/autotest_common.sh@950 -- # wait 83517 00:21:23.379 18:16:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:23.379 18:16:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:23.379 18:16:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:23.379 18:16:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.379 18:16:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:23.379 18:16:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.379 18:16:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.379 18:16:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.379 18:16:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:23.379 ************************************ 00:21:23.380 END TEST nvmf_discovery 00:21:23.380 ************************************ 00:21:23.380 00:21:23.380 real 0m13.936s 00:21:23.380 user 0m27.343s 00:21:23.380 sys 0m1.692s 00:21:23.380 18:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:23.380 18:16:21 -- common/autotest_common.sh@10 -- # set +x 00:21:23.380 18:16:21 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:23.380 18:16:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:23.380 18:16:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:23.380 18:16:21 -- common/autotest_common.sh@10 -- # set +x 00:21:23.380 ************************************ 00:21:23.380 START TEST nvmf_discovery_remove_ifc 00:21:23.380 ************************************ 00:21:23.380 18:16:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:23.639 * Looking for test storage... 
00:21:23.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:23.639 18:16:21 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:23.639 18:16:21 -- nvmf/common.sh@7 -- # uname -s 00:21:23.639 18:16:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.639 18:16:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.639 18:16:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.639 18:16:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.639 18:16:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.639 18:16:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.639 18:16:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.639 18:16:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.639 18:16:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.639 18:16:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.639 18:16:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:21:23.639 18:16:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:21:23.639 18:16:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.639 18:16:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.639 18:16:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:23.639 18:16:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:23.639 18:16:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.639 18:16:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.639 18:16:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.639 18:16:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.639 18:16:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.639 18:16:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.639 18:16:21 -- 
paths/export.sh@5 -- # export PATH 00:21:23.640 18:16:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.640 18:16:21 -- nvmf/common.sh@46 -- # : 0 00:21:23.640 18:16:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:23.640 18:16:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:23.640 18:16:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:23.640 18:16:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.640 18:16:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.640 18:16:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:23.640 18:16:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:23.640 18:16:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:23.640 18:16:21 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:23.640 18:16:21 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:23.640 18:16:21 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:23.640 18:16:21 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:23.640 18:16:21 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:23.640 18:16:21 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:23.640 18:16:21 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:23.640 18:16:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:23.640 18:16:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.640 18:16:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:23.640 18:16:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:23.640 18:16:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:23.640 18:16:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.640 18:16:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.640 18:16:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.640 18:16:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:23.640 18:16:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:23.640 18:16:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:23.640 18:16:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:23.640 18:16:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:23.640 18:16:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:23.640 18:16:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.640 18:16:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.640 18:16:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:23.640 18:16:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:23.640 18:16:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:23.640 18:16:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:23.640 18:16:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:23.640 18:16:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
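The nvmf_veth_init variables traced just above describe the topology the next commands build: one initiator-side veth pair left on the host, two target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining all of them. A condensed sketch of that layout, using the same interface names and 10.0.0.0/24 addresses as this run (the full sequence in the trace below also adds the second target interface, brings every link up, and opens TCP port 4420 in iptables):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # the bridge joins both halves
    ip link set nvmf_tgt_br master nvmf_br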
00:21:23.640 18:16:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:23.640 18:16:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:23.640 18:16:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:23.640 18:16:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:23.640 18:16:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:23.640 18:16:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:23.640 Cannot find device "nvmf_tgt_br" 00:21:23.640 18:16:21 -- nvmf/common.sh@154 -- # true 00:21:23.640 18:16:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.640 Cannot find device "nvmf_tgt_br2" 00:21:23.640 18:16:21 -- nvmf/common.sh@155 -- # true 00:21:23.640 18:16:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:23.640 18:16:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:23.640 Cannot find device "nvmf_tgt_br" 00:21:23.640 18:16:21 -- nvmf/common.sh@157 -- # true 00:21:23.640 18:16:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:23.640 Cannot find device "nvmf_tgt_br2" 00:21:23.640 18:16:21 -- nvmf/common.sh@158 -- # true 00:21:23.640 18:16:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:23.640 18:16:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:23.640 18:16:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.640 18:16:21 -- nvmf/common.sh@161 -- # true 00:21:23.640 18:16:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.640 18:16:21 -- nvmf/common.sh@162 -- # true 00:21:23.640 18:16:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:23.640 18:16:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:23.640 18:16:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:23.640 18:16:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:23.640 18:16:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:23.640 18:16:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:23.898 18:16:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:23.898 18:16:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:23.898 18:16:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:23.898 18:16:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:23.898 18:16:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:23.898 18:16:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:23.898 18:16:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:23.898 18:16:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:23.898 18:16:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:23.898 18:16:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:23.898 18:16:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:23.898 18:16:21 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:21:23.898 18:16:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:23.898 18:16:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:23.898 18:16:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:23.898 18:16:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:23.898 18:16:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:23.899 18:16:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:23.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:21:23.899 00:21:23.899 --- 10.0.0.2 ping statistics --- 00:21:23.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.899 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:23.899 18:16:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:23.899 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:23.899 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:21:23.899 00:21:23.899 --- 10.0.0.3 ping statistics --- 00:21:23.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.899 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:23.899 18:16:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:23.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:23.899 00:21:23.899 --- 10.0.0.1 ping statistics --- 00:21:23.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.899 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:23.899 18:16:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.899 18:16:21 -- nvmf/common.sh@421 -- # return 0 00:21:23.899 18:16:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:23.899 18:16:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.899 18:16:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:23.899 18:16:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:23.899 18:16:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.899 18:16:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:23.899 18:16:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:23.899 18:16:21 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:23.899 18:16:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:23.899 18:16:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:23.899 18:16:21 -- common/autotest_common.sh@10 -- # set +x 00:21:23.899 18:16:21 -- nvmf/common.sh@469 -- # nvmfpid=84068 00:21:23.899 18:16:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:23.899 18:16:21 -- nvmf/common.sh@470 -- # waitforlisten 84068 00:21:23.899 18:16:21 -- common/autotest_common.sh@819 -- # '[' -z 84068 ']' 00:21:23.899 18:16:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.899 18:16:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:23.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.899 18:16:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
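With the ping checks passing, nvmfappstart launches the target inside the namespace by prepending NVMF_TARGET_NS_CMD to NVMF_APP, and waitforlisten blocks until pid 84068 answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait step, assuming SPDK's scripts/rpc.py with rpc_get_methods as the readiness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping than this):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the JSON-RPC socket until the app is ready to accept configuration calls
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done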
00:21:23.899 18:16:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:23.899 18:16:21 -- common/autotest_common.sh@10 -- # set +x 00:21:23.899 [2024-04-25 18:16:21.778641] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:23.899 [2024-04-25 18:16:21.778722] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.157 [2024-04-25 18:16:21.919804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.157 [2024-04-25 18:16:22.031357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:24.157 [2024-04-25 18:16:22.031519] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.157 [2024-04-25 18:16:22.031534] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.157 [2024-04-25 18:16:22.031545] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.157 [2024-04-25 18:16:22.031583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.094 18:16:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:25.094 18:16:22 -- common/autotest_common.sh@852 -- # return 0 00:21:25.094 18:16:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:25.094 18:16:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:25.094 18:16:22 -- common/autotest_common.sh@10 -- # set +x 00:21:25.094 18:16:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.094 18:16:22 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:25.094 18:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:25.094 18:16:22 -- common/autotest_common.sh@10 -- # set +x 00:21:25.094 [2024-04-25 18:16:22.840356] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.094 [2024-04-25 18:16:22.848454] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:25.094 null0 00:21:25.094 [2024-04-25 18:16:22.880386] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.094 18:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:25.094 18:16:22 -- host/discovery_remove_ifc.sh@59 -- # hostpid=84118 00:21:25.094 18:16:22 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:25.094 18:16:22 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84118 /tmp/host.sock 00:21:25.094 18:16:22 -- common/autotest_common.sh@819 -- # '[' -z 84118 ']' 00:21:25.094 18:16:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:25.094 18:16:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:25.094 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:25.094 18:16:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:25.094 18:16:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:25.094 18:16:22 -- common/autotest_common.sh@10 -- # set +x 00:21:25.094 [2024-04-25 18:16:22.962617] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:25.094 [2024-04-25 18:16:22.962721] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84118 ] 00:21:25.358 [2024-04-25 18:16:23.103990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.358 [2024-04-25 18:16:23.215866] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:25.358 [2024-04-25 18:16:23.216055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.295 18:16:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:26.295 18:16:23 -- common/autotest_common.sh@852 -- # return 0 00:21:26.295 18:16:23 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.295 18:16:23 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:26.295 18:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:26.295 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:21:26.295 18:16:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:26.295 18:16:23 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:26.295 18:16:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:26.295 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:21:26.295 18:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:26.295 18:16:24 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:26.295 18:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:26.295 18:16:24 -- common/autotest_common.sh@10 -- # set +x 00:21:27.229 [2024-04-25 18:16:25.053281] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:27.229 [2024-04-25 18:16:25.053315] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:27.229 [2024-04-25 18:16:25.053334] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:27.229 [2024-04-25 18:16:25.139415] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:27.487 [2024-04-25 18:16:25.194936] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:27.487 [2024-04-25 18:16:25.195002] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:27.487 [2024-04-25 18:16:25.195029] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:27.487 [2024-04-25 18:16:25.195044] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:27.487 [2024-04-25 18:16:25.195066] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:27.487 18:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:27.487 18:16:25 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:27.487 [2024-04-25 18:16:25.202023] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11e63e0 was disconnected and freed. delete nvme_qpair. 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:27.487 18:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:27.487 18:16:25 -- common/autotest_common.sh@10 -- # set +x 00:21:27.487 18:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:27.487 18:16:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:27.487 18:16:25 -- common/autotest_common.sh@10 -- # set +x 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:27.487 18:16:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:27.487 18:16:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:28.422 18:16:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:28.422 18:16:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:28.422 18:16:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:28.422 18:16:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:28.422 18:16:26 -- common/autotest_common.sh@10 -- # set +x 00:21:28.422 18:16:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:28.422 18:16:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:28.686 18:16:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:28.686 18:16:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:28.686 18:16:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:29.636 18:16:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:29.636 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.636 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:29.636 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:29.636 18:16:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:29.636 18:16:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:29.636 18:16:27 -- common/autotest_common.sh@10 -- # set +x 00:21:29.636 18:16:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:29.636 18:16:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:29.636 18:16:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:30.569 18:16:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:30.569 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
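The repeated bdev_get_bdevs | jq | sort | xargs round trips above are the wait_for_bdev polling loop from discovery_remove_ifc.sh: it re-reads the bdev list over /tmp/host.sock once per second until the list matches the expected value (nvme0n1 here, and later the empty string after the target interface is pulled down). A sketch of that pattern, assuming scripts/rpc.py as the transport (rpc_cmd in autotest_common.sh is effectively a wrapper around it); the real helper may differ in details:

    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {    # e.g. wait_for_bdev nvme0n1, or wait_for_bdev '' after the ifdown
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }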
00:21:30.569 18:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:30.569 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:30.569 18:16:28 -- common/autotest_common.sh@10 -- # set +x 00:21:30.570 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:30.570 18:16:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:30.570 18:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:30.828 18:16:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:30.828 18:16:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:31.761 18:16:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:31.761 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.761 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:31.761 18:16:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:31.761 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:31.761 18:16:29 -- common/autotest_common.sh@10 -- # set +x 00:21:31.761 18:16:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:31.761 18:16:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:31.761 18:16:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:31.761 18:16:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:32.695 18:16:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:32.695 18:16:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.695 18:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.695 18:16:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:32.695 18:16:30 -- common/autotest_common.sh@10 -- # set +x 00:21:32.695 18:16:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:32.695 18:16:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:32.695 18:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.695 [2024-04-25 18:16:30.623086] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:32.695 [2024-04-25 18:16:30.623156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.695 [2024-04-25 18:16:30.623171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.695 [2024-04-25 18:16:30.623182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.695 [2024-04-25 18:16:30.623190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.695 [2024-04-25 18:16:30.623199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.695 [2024-04-25 18:16:30.623208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.695 [2024-04-25 18:16:30.623217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.695 [2024-04-25 18:16:30.623225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.695 [2024-04-25 
18:16:30.623234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:32.695 [2024-04-25 18:16:30.623242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.695 [2024-04-25 18:16:30.623250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11afc40 is same with the state(5) to be set 00:21:32.953 [2024-04-25 18:16:30.633086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11afc40 (9): Bad file descriptor 00:21:32.953 18:16:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:32.953 18:16:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:32.953 [2024-04-25 18:16:30.643108] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:33.889 18:16:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:33.889 18:16:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.889 18:16:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:33.889 18:16:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:33.889 18:16:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.889 18:16:31 -- common/autotest_common.sh@10 -- # set +x 00:21:33.889 18:16:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:33.889 [2024-04-25 18:16:31.678383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:34.826 [2024-04-25 18:16:32.703372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:34.826 [2024-04-25 18:16:32.703495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11afc40 with addr=10.0.0.2, port=4420 00:21:34.826 [2024-04-25 18:16:32.703527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11afc40 is same with the state(5) to be set 00:21:34.826 [2024-04-25 18:16:32.704362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11afc40 (9): Bad file descriptor 00:21:34.826 [2024-04-25 18:16:32.704448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.826 [2024-04-25 18:16:32.704499] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:34.826 [2024-04-25 18:16:32.704562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.826 [2024-04-25 18:16:32.704590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.826 [2024-04-25 18:16:32.704614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.826 [2024-04-25 18:16:32.704633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.826 [2024-04-25 18:16:32.704653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.826 [2024-04-25 18:16:32.704672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.826 [2024-04-25 18:16:32.704694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.826 [2024-04-25 18:16:32.704716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.826 [2024-04-25 18:16:32.704736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.826 [2024-04-25 18:16:32.704755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.826 [2024-04-25 18:16:32.704774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:21:34.826 [2024-04-25 18:16:32.704831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11571c0 (9): Bad file descriptor 00:21:34.826 [2024-04-25 18:16:32.705833] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:34.826 [2024-04-25 18:16:32.705898] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:34.826 18:16:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.826 18:16:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:34.826 18:16:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:36.200 18:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.200 18:16:33 -- common/autotest_common.sh@10 -- # set +x 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:36.200 18:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:36.200 18:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:36.200 18:16:33 -- common/autotest_common.sh@10 -- # set +x 00:21:36.200 18:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:36.200 18:16:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:36.793 [2024-04-25 18:16:34.714848] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:36.793 [2024-04-25 18:16:34.714872] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:36.793 [2024-04-25 18:16:34.714905] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:37.051 [2024-04-25 18:16:34.800947] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:37.051 [2024-04-25 18:16:34.855779] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:37.051 [2024-04-25 18:16:34.855840] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:37.051 [2024-04-25 18:16:34.855863] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:37.051 [2024-04-25 18:16:34.855877] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:21:37.051 [2024-04-25 18:16:34.855885] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:37.051 [2024-04-25 18:16:34.863347] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11a03d0 was disconnected and freed. delete nvme_qpair. 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:37.051 18:16:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:37.051 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:37.051 18:16:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:37.051 18:16:34 -- host/discovery_remove_ifc.sh@90 -- # killprocess 84118 00:21:37.051 18:16:34 -- common/autotest_common.sh@926 -- # '[' -z 84118 ']' 00:21:37.051 18:16:34 -- common/autotest_common.sh@930 -- # kill -0 84118 00:21:37.051 18:16:34 -- common/autotest_common.sh@931 -- # uname 00:21:37.051 18:16:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.051 18:16:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84118 00:21:37.051 killing process with pid 84118 00:21:37.051 18:16:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:37.051 18:16:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:37.051 18:16:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84118' 00:21:37.051 18:16:34 -- common/autotest_common.sh@945 -- # kill 84118 00:21:37.051 18:16:34 -- common/autotest_common.sh@950 -- # wait 84118 00:21:37.309 18:16:35 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:37.309 18:16:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.309 18:16:35 -- nvmf/common.sh@116 -- # sync 00:21:37.309 18:16:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.309 18:16:35 -- nvmf/common.sh@119 -- # set +e 00:21:37.309 18:16:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.309 18:16:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.309 rmmod nvme_tcp 00:21:37.309 rmmod nvme_fabrics 00:21:37.567 rmmod nvme_keyring 00:21:37.567 18:16:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.567 18:16:35 -- nvmf/common.sh@123 -- # set -e 00:21:37.567 18:16:35 -- nvmf/common.sh@124 -- # return 0 00:21:37.567 18:16:35 -- nvmf/common.sh@477 -- # '[' -n 84068 ']' 00:21:37.567 18:16:35 -- nvmf/common.sh@478 -- # killprocess 84068 00:21:37.567 18:16:35 -- common/autotest_common.sh@926 -- # '[' -z 84068 ']' 00:21:37.567 18:16:35 -- common/autotest_common.sh@930 -- # kill -0 84068 00:21:37.567 18:16:35 -- common/autotest_common.sh@931 -- # uname 00:21:37.567 18:16:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.567 18:16:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84068 00:21:37.567 killing process with pid 84068 00:21:37.567 18:16:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:37.567 18:16:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
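Teardown then stops the host-side app (pid 84118) and, just below, the nvmf target (pid 84068) through the killprocess helper; its liveness and process-name checks are visible in the trace (kill -0, uname, ps --no-headers -o comm=). A rough sketch of that shape, leaving out the sudo special-casing the real autotest_common.sh helper performs:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                  # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 / reactor_1 in these runs
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap the child so the next test can reuse its resources
    }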
00:21:37.567 18:16:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84068' 00:21:37.567 18:16:35 -- common/autotest_common.sh@945 -- # kill 84068 00:21:37.567 18:16:35 -- common/autotest_common.sh@950 -- # wait 84068 00:21:37.826 18:16:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.826 18:16:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.826 18:16:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.826 18:16:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.826 18:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.826 18:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.826 18:16:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:37.826 00:21:37.826 real 0m14.290s 00:21:37.826 user 0m24.632s 00:21:37.826 sys 0m1.559s 00:21:37.826 18:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.826 18:16:35 -- common/autotest_common.sh@10 -- # set +x 00:21:37.826 ************************************ 00:21:37.826 END TEST nvmf_discovery_remove_ifc 00:21:37.826 ************************************ 00:21:37.826 18:16:35 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:21:37.826 18:16:35 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:37.826 18:16:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:37.826 18:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:37.826 18:16:35 -- common/autotest_common.sh@10 -- # set +x 00:21:37.826 ************************************ 00:21:37.826 START TEST nvmf_digest 00:21:37.826 ************************************ 00:21:37.826 18:16:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:37.826 * Looking for test storage... 
00:21:37.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:37.826 18:16:35 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:37.826 18:16:35 -- nvmf/common.sh@7 -- # uname -s 00:21:37.826 18:16:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.826 18:16:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.826 18:16:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.826 18:16:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.826 18:16:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.826 18:16:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.826 18:16:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.826 18:16:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.826 18:16:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.826 18:16:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:21:37.826 18:16:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:21:37.826 18:16:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.826 18:16:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.826 18:16:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:37.826 18:16:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:37.826 18:16:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.826 18:16:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.826 18:16:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.826 18:16:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.826 18:16:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.826 18:16:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.826 18:16:35 -- paths/export.sh@5 
-- # export PATH 00:21:37.826 18:16:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.826 18:16:35 -- nvmf/common.sh@46 -- # : 0 00:21:37.826 18:16:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:37.826 18:16:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:37.826 18:16:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:37.826 18:16:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.826 18:16:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.826 18:16:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:37.826 18:16:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:37.826 18:16:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:37.826 18:16:35 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:37.826 18:16:35 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:37.826 18:16:35 -- host/digest.sh@16 -- # runtime=2 00:21:37.826 18:16:35 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:21:37.826 18:16:35 -- host/digest.sh@132 -- # nvmftestinit 00:21:37.826 18:16:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:37.826 18:16:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.826 18:16:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:37.826 18:16:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:37.826 18:16:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:37.826 18:16:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.826 18:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.826 18:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.826 18:16:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:37.826 18:16:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:37.826 18:16:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.826 18:16:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.826 18:16:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:37.826 18:16:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:37.826 18:16:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:37.826 18:16:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:37.826 18:16:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:37.826 18:16:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.826 18:16:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:37.826 18:16:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:37.826 18:16:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:37.826 18:16:35 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:37.826 18:16:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:37.826 18:16:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:37.826 Cannot find device "nvmf_tgt_br" 00:21:37.826 18:16:35 -- nvmf/common.sh@154 -- # true 00:21:37.826 18:16:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:37.826 Cannot find device "nvmf_tgt_br2" 00:21:37.826 18:16:35 -- nvmf/common.sh@155 -- # true 00:21:37.826 18:16:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:38.084 18:16:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:38.084 Cannot find device "nvmf_tgt_br" 00:21:38.084 18:16:35 -- nvmf/common.sh@157 -- # true 00:21:38.084 18:16:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:38.084 Cannot find device "nvmf_tgt_br2" 00:21:38.084 18:16:35 -- nvmf/common.sh@158 -- # true 00:21:38.084 18:16:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:38.084 18:16:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:38.084 18:16:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.084 18:16:35 -- nvmf/common.sh@161 -- # true 00:21:38.084 18:16:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.084 18:16:35 -- nvmf/common.sh@162 -- # true 00:21:38.084 18:16:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:38.084 18:16:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:38.084 18:16:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:38.084 18:16:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:38.084 18:16:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:38.084 18:16:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:38.084 18:16:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:38.084 18:16:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:38.084 18:16:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:38.084 18:16:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:38.084 18:16:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:38.084 18:16:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:38.084 18:16:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:38.084 18:16:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:38.084 18:16:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:38.084 18:16:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:38.084 18:16:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:38.084 18:16:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:38.084 18:16:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:38.084 18:16:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:38.084 18:16:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:38.084 
18:16:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:38.084 18:16:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:38.084 18:16:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:38.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:38.084 00:21:38.084 --- 10.0.0.2 ping statistics --- 00:21:38.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.084 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:38.084 18:16:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:38.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:38.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:21:38.085 00:21:38.085 --- 10.0.0.3 ping statistics --- 00:21:38.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.085 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:38.085 18:16:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:38.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:38.343 00:21:38.343 --- 10.0.0.1 ping statistics --- 00:21:38.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.343 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:38.343 18:16:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.343 18:16:36 -- nvmf/common.sh@421 -- # return 0 00:21:38.343 18:16:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:38.343 18:16:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.343 18:16:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:38.343 18:16:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:38.343 18:16:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.343 18:16:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:38.343 18:16:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:38.343 18:16:36 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:38.343 18:16:36 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:21:38.343 18:16:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:38.343 18:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:38.343 18:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:38.343 ************************************ 00:21:38.343 START TEST nvmf_digest_clean 00:21:38.343 ************************************ 00:21:38.343 18:16:36 -- common/autotest_common.sh@1104 -- # run_digest 00:21:38.343 18:16:36 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:21:38.343 18:16:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:38.343 18:16:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:38.343 18:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:38.343 18:16:36 -- nvmf/common.sh@469 -- # nvmfpid=84536 00:21:38.343 18:16:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:38.343 18:16:36 -- nvmf/common.sh@470 -- # waitforlisten 84536 00:21:38.343 18:16:36 -- common/autotest_common.sh@819 -- # '[' -z 84536 ']' 00:21:38.343 18:16:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.343 18:16:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:38.343 18:16:36 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.343 18:16:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:38.343 18:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:38.343 [2024-04-25 18:16:36.105409] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:38.343 [2024-04-25 18:16:36.105508] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.343 [2024-04-25 18:16:36.238188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.601 [2024-04-25 18:16:36.312034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:38.601 [2024-04-25 18:16:36.312181] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.601 [2024-04-25 18:16:36.312194] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.601 [2024-04-25 18:16:36.312201] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.601 [2024-04-25 18:16:36.312229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.168 18:16:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:39.168 18:16:37 -- common/autotest_common.sh@852 -- # return 0 00:21:39.168 18:16:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:39.168 18:16:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:39.168 18:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:39.168 18:16:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.168 18:16:37 -- host/digest.sh@120 -- # common_target_config 00:21:39.168 18:16:37 -- host/digest.sh@43 -- # rpc_cmd 00:21:39.168 18:16:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:39.168 18:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:39.427 null0 00:21:39.427 [2024-04-25 18:16:37.164742] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.427 [2024-04-25 18:16:37.188857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.427 18:16:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:39.427 18:16:37 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:21:39.427 18:16:37 -- host/digest.sh@77 -- # local rw bs qd 00:21:39.427 18:16:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:39.427 18:16:37 -- host/digest.sh@80 -- # rw=randread 00:21:39.427 18:16:37 -- host/digest.sh@80 -- # bs=4096 00:21:39.427 18:16:37 -- host/digest.sh@80 -- # qd=128 00:21:39.427 18:16:37 -- host/digest.sh@82 -- # bperfpid=84586 00:21:39.427 18:16:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:39.427 18:16:37 -- host/digest.sh@83 -- # waitforlisten 84586 /var/tmp/bperf.sock 00:21:39.427 18:16:37 -- common/autotest_common.sh@819 -- # '[' -z 84586 ']' 00:21:39.427 18:16:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.427 18:16:37 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:39.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.427 18:16:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.427 18:16:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:39.427 18:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:39.427 [2024-04-25 18:16:37.252153] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:39.427 [2024-04-25 18:16:37.252256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84586 ] 00:21:39.686 [2024-04-25 18:16:37.393452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.686 [2024-04-25 18:16:37.496145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.623 18:16:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.623 18:16:38 -- common/autotest_common.sh@852 -- # return 0 00:21:40.623 18:16:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:40.623 18:16:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:40.623 18:16:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:40.881 18:16:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.881 18:16:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:41.139 nvme0n1 00:21:41.139 18:16:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:41.139 18:16:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:41.139 Running I/O for 2 seconds... 
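The run that just started is driven entirely over bdevperf's RPC socket. A condensed sketch of the sequence traced above for this first data point (randread, 4 KiB blocks, queue depth 128; paths shown relative to the SPDK repo root):

    # start bdevperf idle (-z) and paused (--wait-for-rpc) so digest options can be applied first
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
    # finish subsystem init, then attach the NVMe/TCP controller with data digest enabled (--ddgst)
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload against the resulting nvme0n1 bdev
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later passes below repeat the same sequence with randread 128 KiB / qd 16 and the two randwrite variants.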
00:21:43.674 00:21:43.674 Latency(us) 00:21:43.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.674 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:43.674 nvme0n1 : 2.00 22002.71 85.95 0.00 0.00 5812.57 2532.07 16324.42 00:21:43.674 =================================================================================================================== 00:21:43.674 Total : 22002.71 85.95 0.00 0.00 5812.57 2532.07 16324.42 00:21:43.674 0 00:21:43.674 18:16:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:43.674 18:16:41 -- host/digest.sh@92 -- # get_accel_stats 00:21:43.674 18:16:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:43.674 | select(.opcode=="crc32c") 00:21:43.674 | "\(.module_name) \(.executed)"' 00:21:43.674 18:16:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:43.674 18:16:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:43.674 18:16:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:43.674 18:16:41 -- host/digest.sh@93 -- # exp_module=software 00:21:43.674 18:16:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:43.674 18:16:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:43.674 18:16:41 -- host/digest.sh@97 -- # killprocess 84586 00:21:43.674 18:16:41 -- common/autotest_common.sh@926 -- # '[' -z 84586 ']' 00:21:43.674 18:16:41 -- common/autotest_common.sh@930 -- # kill -0 84586 00:21:43.674 18:16:41 -- common/autotest_common.sh@931 -- # uname 00:21:43.674 18:16:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.674 18:16:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84586 00:21:43.674 18:16:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:43.674 18:16:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:43.674 killing process with pid 84586 00:21:43.674 18:16:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84586' 00:21:43.674 18:16:41 -- common/autotest_common.sh@945 -- # kill 84586 00:21:43.674 Received shutdown signal, test time was about 2.000000 seconds 00:21:43.674 00:21:43.674 Latency(us) 00:21:43.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.674 =================================================================================================================== 00:21:43.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.674 18:16:41 -- common/autotest_common.sh@950 -- # wait 84586 00:21:43.674 18:16:41 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:21:43.674 18:16:41 -- host/digest.sh@77 -- # local rw bs qd 00:21:43.674 18:16:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:43.674 18:16:41 -- host/digest.sh@80 -- # rw=randread 00:21:43.674 18:16:41 -- host/digest.sh@80 -- # bs=131072 00:21:43.674 18:16:41 -- host/digest.sh@80 -- # qd=16 00:21:43.674 18:16:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:43.674 18:16:41 -- host/digest.sh@82 -- # bperfpid=84677 00:21:43.674 18:16:41 -- host/digest.sh@83 -- # waitforlisten 84677 /var/tmp/bperf.sock 00:21:43.674 18:16:41 -- common/autotest_common.sh@819 -- # '[' -z 84677 ']' 00:21:43.674 18:16:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:43.674 18:16:41 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:21:43.674 18:16:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:43.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:43.674 18:16:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:43.674 18:16:41 -- common/autotest_common.sh@10 -- # set +x 00:21:43.674 [2024-04-25 18:16:41.572116] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:43.674 [2024-04-25 18:16:41.572186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84677 ] 00:21:43.674 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:43.674 Zero copy mechanism will not be used. 00:21:43.933 [2024-04-25 18:16:41.702395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.933 [2024-04-25 18:16:41.779840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.933 18:16:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.933 18:16:41 -- common/autotest_common.sh@852 -- # return 0 00:21:43.933 18:16:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:43.933 18:16:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:43.933 18:16:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:44.191 18:16:42 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.191 18:16:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.758 nvme0n1 00:21:44.758 18:16:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:44.758 18:16:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:44.758 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:44.758 Zero copy mechanism will not be used. 00:21:44.758 Running I/O for 2 seconds... 
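After each timed run the test checks which accel module actually computed the CRC32C digests, using the accel_get_stats RPC and the jq filter traced above. A rough sketch of that check (the exact plumbing inside host/digest.sh may differ slightly):

    # pull per-opcode accel statistics from bdevperf and keep only the crc32c row
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
            | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # with no accel offload configured the digests must have been computed in software, and at least once
    (( acc_executed > 0 )) && [[ $acc_module == software ]]

Only after this check passes is the bdevperf instance killed and the next block-size/queue-depth combination started.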
00:21:46.658 00:21:46.658 Latency(us) 00:21:46.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.658 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:46.658 nvme0n1 : 2.00 9912.53 1239.07 0.00 0.00 1611.26 640.47 3217.22 00:21:46.659 =================================================================================================================== 00:21:46.659 Total : 9912.53 1239.07 0.00 0.00 1611.26 640.47 3217.22 00:21:46.659 0 00:21:46.659 18:16:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:46.659 18:16:44 -- host/digest.sh@92 -- # get_accel_stats 00:21:46.659 18:16:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:46.659 18:16:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:46.659 18:16:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:46.659 | select(.opcode=="crc32c") 00:21:46.659 | "\(.module_name) \(.executed)"' 00:21:46.918 18:16:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:46.918 18:16:44 -- host/digest.sh@93 -- # exp_module=software 00:21:46.918 18:16:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:46.918 18:16:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:46.918 18:16:44 -- host/digest.sh@97 -- # killprocess 84677 00:21:46.918 18:16:44 -- common/autotest_common.sh@926 -- # '[' -z 84677 ']' 00:21:46.918 18:16:44 -- common/autotest_common.sh@930 -- # kill -0 84677 00:21:46.918 18:16:44 -- common/autotest_common.sh@931 -- # uname 00:21:46.918 18:16:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:46.918 18:16:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84677 00:21:46.918 18:16:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:46.918 18:16:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:46.918 18:16:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84677' 00:21:46.918 killing process with pid 84677 00:21:46.918 18:16:44 -- common/autotest_common.sh@945 -- # kill 84677 00:21:46.918 Received shutdown signal, test time was about 2.000000 seconds 00:21:46.918 00:21:46.918 Latency(us) 00:21:46.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.918 =================================================================================================================== 00:21:46.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.918 18:16:44 -- common/autotest_common.sh@950 -- # wait 84677 00:21:47.177 18:16:45 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:21:47.177 18:16:45 -- host/digest.sh@77 -- # local rw bs qd 00:21:47.177 18:16:45 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:47.177 18:16:45 -- host/digest.sh@80 -- # rw=randwrite 00:21:47.177 18:16:45 -- host/digest.sh@80 -- # bs=4096 00:21:47.177 18:16:45 -- host/digest.sh@80 -- # qd=128 00:21:47.177 18:16:45 -- host/digest.sh@82 -- # bperfpid=84750 00:21:47.177 18:16:45 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:47.177 18:16:45 -- host/digest.sh@83 -- # waitforlisten 84750 /var/tmp/bperf.sock 00:21:47.177 18:16:45 -- common/autotest_common.sh@819 -- # '[' -z 84750 ']' 00:21:47.177 18:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:47.177 18:16:45 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:21:47.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:47.177 18:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:47.177 18:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.177 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.177 [2024-04-25 18:16:45.081968] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:47.177 [2024-04-25 18:16:45.082061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84750 ] 00:21:47.435 [2024-04-25 18:16:45.221088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.435 [2024-04-25 18:16:45.308130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.371 18:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.371 18:16:46 -- common/autotest_common.sh@852 -- # return 0 00:21:48.371 18:16:46 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:48.371 18:16:46 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:48.371 18:16:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:48.629 18:16:46 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.629 18:16:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.888 nvme0n1 00:21:48.888 18:16:46 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:48.888 18:16:46 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.888 Running I/O for 2 seconds... 
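Each bdevperf instance is torn down through the killprocess helper whose trace repeats above. Paraphrased from that trace (not the verbatim source in common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # trace: '[' -z 84586 ']'
        kill -0 "$pid" || return 1           # the process must still be alive
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the real helper special-cases process_name == sudo; in this run the name is
            # always reactor_0/reactor_1, so the plain kill path below is taken
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

The "Received shutdown signal, test time was about 2.000000 seconds" block that follows each kill is bdevperf's own shutdown summary, not an error.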
00:21:50.792 00:21:50.792 Latency(us) 00:21:50.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.792 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:50.792 nvme0n1 : 2.01 27296.02 106.63 0.00 0.00 4684.51 1846.92 8281.37 00:21:50.792 =================================================================================================================== 00:21:50.792 Total : 27296.02 106.63 0.00 0.00 4684.51 1846.92 8281.37 00:21:50.792 0 00:21:50.792 18:16:48 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:50.792 18:16:48 -- host/digest.sh@92 -- # get_accel_stats 00:21:50.792 18:16:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:50.792 18:16:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:50.792 | select(.opcode=="crc32c") 00:21:50.792 | "\(.module_name) \(.executed)"' 00:21:50.792 18:16:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:51.051 18:16:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:51.051 18:16:48 -- host/digest.sh@93 -- # exp_module=software 00:21:51.051 18:16:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:51.051 18:16:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:51.051 18:16:48 -- host/digest.sh@97 -- # killprocess 84750 00:21:51.051 18:16:48 -- common/autotest_common.sh@926 -- # '[' -z 84750 ']' 00:21:51.051 18:16:48 -- common/autotest_common.sh@930 -- # kill -0 84750 00:21:51.051 18:16:48 -- common/autotest_common.sh@931 -- # uname 00:21:51.051 18:16:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:51.051 18:16:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84750 00:21:51.051 18:16:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:51.051 18:16:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:51.051 18:16:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84750' 00:21:51.051 killing process with pid 84750 00:21:51.051 18:16:48 -- common/autotest_common.sh@945 -- # kill 84750 00:21:51.051 Received shutdown signal, test time was about 2.000000 seconds 00:21:51.051 00:21:51.051 Latency(us) 00:21:51.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.051 =================================================================================================================== 00:21:51.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.051 18:16:48 -- common/autotest_common.sh@950 -- # wait 84750 00:21:51.310 18:16:49 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:21:51.310 18:16:49 -- host/digest.sh@77 -- # local rw bs qd 00:21:51.310 18:16:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:51.310 18:16:49 -- host/digest.sh@80 -- # rw=randwrite 00:21:51.310 18:16:49 -- host/digest.sh@80 -- # bs=131072 00:21:51.310 18:16:49 -- host/digest.sh@80 -- # qd=16 00:21:51.311 18:16:49 -- host/digest.sh@82 -- # bperfpid=84840 00:21:51.311 18:16:49 -- host/digest.sh@83 -- # waitforlisten 84840 /var/tmp/bperf.sock 00:21:51.311 18:16:49 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:51.311 18:16:49 -- common/autotest_common.sh@819 -- # '[' -z 84840 ']' 00:21:51.311 18:16:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:51.311 18:16:49 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:21:51.311 18:16:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:51.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:51.311 18:16:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:51.311 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:21:51.570 [2024-04-25 18:16:49.259937] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:51.570 [2024-04-25 18:16:49.260037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84840 ] 00:21:51.570 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:51.570 Zero copy mechanism will not be used. 00:21:51.570 [2024-04-25 18:16:49.398245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.570 [2024-04-25 18:16:49.473121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.506 18:16:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:52.506 18:16:50 -- common/autotest_common.sh@852 -- # return 0 00:21:52.506 18:16:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:52.506 18:16:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:52.506 18:16:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:52.764 18:16:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:52.764 18:16:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:53.022 nvme0n1 00:21:53.022 18:16:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:53.022 18:16:50 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:53.022 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:53.022 Zero copy mechanism will not be used. 00:21:53.022 Running I/O for 2 seconds... 
00:21:54.948 00:21:54.948 Latency(us) 00:21:54.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.948 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:54.948 nvme0n1 : 2.00 8250.32 1031.29 0.00 0.00 1934.80 1556.48 10545.34 00:21:54.948 =================================================================================================================== 00:21:54.948 Total : 8250.32 1031.29 0.00 0.00 1934.80 1556.48 10545.34 00:21:54.948 0 00:21:54.948 18:16:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:54.948 18:16:52 -- host/digest.sh@92 -- # get_accel_stats 00:21:54.948 18:16:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:54.948 18:16:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:54.948 18:16:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:54.948 | select(.opcode=="crc32c") 00:21:54.948 | "\(.module_name) \(.executed)"' 00:21:55.206 18:16:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:55.206 18:16:53 -- host/digest.sh@93 -- # exp_module=software 00:21:55.206 18:16:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:55.206 18:16:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:55.206 18:16:53 -- host/digest.sh@97 -- # killprocess 84840 00:21:55.206 18:16:53 -- common/autotest_common.sh@926 -- # '[' -z 84840 ']' 00:21:55.206 18:16:53 -- common/autotest_common.sh@930 -- # kill -0 84840 00:21:55.206 18:16:53 -- common/autotest_common.sh@931 -- # uname 00:21:55.206 18:16:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:55.206 18:16:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84840 00:21:55.206 18:16:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:55.206 18:16:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:55.206 18:16:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84840' 00:21:55.206 killing process with pid 84840 00:21:55.206 Received shutdown signal, test time was about 2.000000 seconds 00:21:55.206 00:21:55.206 Latency(us) 00:21:55.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.206 =================================================================================================================== 00:21:55.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.206 18:16:53 -- common/autotest_common.sh@945 -- # kill 84840 00:21:55.206 18:16:53 -- common/autotest_common.sh@950 -- # wait 84840 00:21:55.465 18:16:53 -- host/digest.sh@126 -- # killprocess 84536 00:21:55.465 18:16:53 -- common/autotest_common.sh@926 -- # '[' -z 84536 ']' 00:21:55.465 18:16:53 -- common/autotest_common.sh@930 -- # kill -0 84536 00:21:55.465 18:16:53 -- common/autotest_common.sh@931 -- # uname 00:21:55.465 18:16:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:55.465 18:16:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84536 00:21:55.465 18:16:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:55.465 18:16:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:55.465 killing process with pid 84536 00:21:55.465 18:16:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84536' 00:21:55.465 18:16:53 -- common/autotest_common.sh@945 -- # kill 84536 00:21:55.465 18:16:53 -- common/autotest_common.sh@950 -- # wait 84536 00:21:55.723 00:21:55.724 real 0m17.567s 00:21:55.724 
user 0m32.656s 00:21:55.724 sys 0m4.718s 00:21:55.724 18:16:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.724 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:21:55.724 ************************************ 00:21:55.724 END TEST nvmf_digest_clean 00:21:55.724 ************************************ 00:21:55.724 18:16:53 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:21:55.724 18:16:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:55.724 18:16:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:55.724 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:21:55.982 ************************************ 00:21:55.982 START TEST nvmf_digest_error 00:21:55.982 ************************************ 00:21:55.982 18:16:53 -- common/autotest_common.sh@1104 -- # run_digest_error 00:21:55.982 18:16:53 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:21:55.982 18:16:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:55.982 18:16:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:55.982 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:21:55.982 18:16:53 -- nvmf/common.sh@469 -- # nvmfpid=84952 00:21:55.982 18:16:53 -- nvmf/common.sh@470 -- # waitforlisten 84952 00:21:55.982 18:16:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:55.982 18:16:53 -- common/autotest_common.sh@819 -- # '[' -z 84952 ']' 00:21:55.983 18:16:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.983 18:16:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:55.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.983 18:16:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.983 18:16:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:55.983 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:21:55.983 [2024-04-25 18:16:53.714717] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:55.983 [2024-04-25 18:16:53.714796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.983 [2024-04-25 18:16:53.843440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.241 [2024-04-25 18:16:53.930762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:56.241 [2024-04-25 18:16:53.930910] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.241 [2024-04-25 18:16:53.930922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.241 [2024-04-25 18:16:53.930930] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
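nvmf_digest_error restarts the target the same way the clean test did: nvmfappstart launches nvmf_tgt inside the test namespace, paused until its RPC socket answers. Condensed from the trace above:

    # start the target in the namespace created earlier, with tracepoint group mask 0xFFFF
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"     # polls /var/tmp/spdk.sock until the app answers
    # common_target_config then creates the TCP transport and the 10.0.0.2:4420 listener
    # (the individual rpc_cmd calls are batched and not expanded in this trace)

The RDMA_REQ_RDY_TO_COMPL_PEND "name too long" message appears to be a cosmetic trace-registration warning here; the target continues to start normally.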
00:21:56.241 [2024-04-25 18:16:53.930961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.809 18:16:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:56.809 18:16:54 -- common/autotest_common.sh@852 -- # return 0 00:21:56.809 18:16:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:56.809 18:16:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:56.809 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:56.809 18:16:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.809 18:16:54 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:56.809 18:16:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.809 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:56.809 [2024-04-25 18:16:54.703418] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:56.809 18:16:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.809 18:16:54 -- host/digest.sh@104 -- # common_target_config 00:21:56.809 18:16:54 -- host/digest.sh@43 -- # rpc_cmd 00:21:56.809 18:16:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.809 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 null0 00:21:57.068 [2024-04-25 18:16:54.814541] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.068 [2024-04-25 18:16:54.838666] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.068 18:16:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.068 18:16:54 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:21:57.068 18:16:54 -- host/digest.sh@54 -- # local rw bs qd 00:21:57.068 18:16:54 -- host/digest.sh@56 -- # rw=randread 00:21:57.068 18:16:54 -- host/digest.sh@56 -- # bs=4096 00:21:57.068 18:16:54 -- host/digest.sh@56 -- # qd=128 00:21:57.068 18:16:54 -- host/digest.sh@58 -- # bperfpid=84996 00:21:57.068 18:16:54 -- host/digest.sh@60 -- # waitforlisten 84996 /var/tmp/bperf.sock 00:21:57.068 18:16:54 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:57.068 18:16:54 -- common/autotest_common.sh@819 -- # '[' -z 84996 ']' 00:21:57.068 18:16:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:57.068 18:16:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:57.068 18:16:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:57.068 18:16:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.068 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 [2024-04-25 18:16:54.901709] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:57.068 [2024-04-25 18:16:54.901798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84996 ] 00:21:57.327 [2024-04-25 18:16:55.032818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.327 [2024-04-25 18:16:55.119713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.892 18:16:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.892 18:16:55 -- common/autotest_common.sh@852 -- # return 0 00:21:57.892 18:16:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:57.892 18:16:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:58.150 18:16:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:58.150 18:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.150 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:58.150 18:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.150 18:16:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:58.150 18:16:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:58.409 nvme0n1 00:21:58.409 18:16:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:58.409 18:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.409 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:58.409 18:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.409 18:16:56 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:58.409 18:16:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:58.668 Running I/O for 2 seconds... 
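What makes this variant the "error" test is visible in the RPCs traced above: crc32c on the target is routed through the accel error module, and the injection is toggled around the controller attach. A condensed sketch (rpc_cmd goes to the target's /var/tmp/spdk.sock inside the namespace, rpc.py -s /var/tmp/bperf.sock to bdevperf):

    # while the target is still paused in --wait-for-rpc: assign crc32c to the error-injection accel module
    rpc_cmd accel_assign_opc -o crc32c -m error
    # bdevperf: keep NVMe error statistics and retry forever, so injected digest errors do not fail I/O
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable         # attach with injection off
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt the next 256 crc32c operations
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The long run of "data digest error ... COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions that follows is therefore the expected outcome: the host's nvme_tcp driver detects the corrupted data digests coming back from the target and, with --bdev-retry-count -1, keeps retrying through them.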
00:21:58.668 [2024-04-25 18:16:56.388712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.668 [2024-04-25 18:16:56.388770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.668 [2024-04-25 18:16:56.388783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.668 [2024-04-25 18:16:56.402646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.668 [2024-04-25 18:16:56.402695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.668 [2024-04-25 18:16:56.402707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.668 [2024-04-25 18:16:56.412068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.668 [2024-04-25 18:16:56.412116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.668 [2024-04-25 18:16:56.412128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.668 [2024-04-25 18:16:56.423983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.668 [2024-04-25 18:16:56.424031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.668 [2024-04-25 18:16:56.424043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.668 [2024-04-25 18:16:56.434037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.668 [2024-04-25 18:16:56.434085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.668 [2024-04-25 18:16:56.434097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.668 [2024-04-25 18:16:56.447438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.447485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.447497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.459884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.459934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.459946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.469605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.469669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.469697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.481945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.481993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.482005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.494153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.494200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.494211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.506416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.506461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.506473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.518700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.518747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.518758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.531395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.531441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.531453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.543291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.543337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.543348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.552894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.552941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.552952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.562476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.562522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.562534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.572712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.572759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.572770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.583354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.583401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.583413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.669 [2024-04-25 18:16:56.592666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.669 [2024-04-25 18:16:56.592712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.669 [2024-04-25 18:16:56.592723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.602909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.602955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.602967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.614473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.614519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:58.928 [2024-04-25 18:16:56.614530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.623997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.624045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.624056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.634160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.634206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.634217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.644448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.644495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.644507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.653746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.653793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.653804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.666179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.666226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.666237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.675901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.675949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.675960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.685250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.685308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5417 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.685321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.695116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.695163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.695174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.707149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.707196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.707208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.717040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.717087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.717099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.727821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.727868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.727879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.928 [2024-04-25 18:16:56.738361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.928 [2024-04-25 18:16:56.738407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.928 [2024-04-25 18:16:56.738418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.748133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.748180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.748191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.760797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.760845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.760857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.770787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.770833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.770844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.780760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.780808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.780819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.791035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.791081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.791092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.802317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.802374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.802387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.811763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.811810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.811821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.821885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.821932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.821944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.831873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 
00:21:58.929 [2024-04-25 18:16:56.831921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.831933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.841441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.841489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.841500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.929 [2024-04-25 18:16:56.849987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:58.929 [2024-04-25 18:16:56.850034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.929 [2024-04-25 18:16:56.850046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.188 [2024-04-25 18:16:56.861431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.188 [2024-04-25 18:16:56.861480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.188 [2024-04-25 18:16:56.861492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.188 [2024-04-25 18:16:56.872404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.188 [2024-04-25 18:16:56.872451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.188 [2024-04-25 18:16:56.872462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.188 [2024-04-25 18:16:56.881922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.188 [2024-04-25 18:16:56.881969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.188 [2024-04-25 18:16:56.881980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.188 [2024-04-25 18:16:56.891903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.188 [2024-04-25 18:16:56.891949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.188 [2024-04-25 18:16:56.891960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.188 [2024-04-25 18:16:56.902159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.188 [2024-04-25 18:16:56.902207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.188 [2024-04-25 18:16:56.902218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.188 [2024-04-25 18:16:56.912726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.188 [2024-04-25 18:16:56.912774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.188 [2024-04-25 18:16:56.912785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.188 [2024-04-25 18:16:56.922354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:56.922415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:56.922428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:56.935490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:56.935541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:56.935560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:56.946761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:56.946807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:56.946819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:56.957556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:56.957605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:56.957616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:56.967185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:56.967232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:56.967243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:56.979453] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:56.979499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:56.979511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:56.992516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:56.992563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:56.992575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.005926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.005972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.005983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.017570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.017619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.017661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.026869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.026916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.026927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.039079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.039126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.039137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.052169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.052215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.052226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.064158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.064218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.064230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.076411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.076457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.076469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.088725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.088772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.088783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.101923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.101969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.101981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.189 [2024-04-25 18:16:57.114201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.189 [2024-04-25 18:16:57.114249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.189 [2024-04-25 18:16:57.114260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.448 [2024-04-25 18:16:57.123303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.448 [2024-04-25 18:16:57.123348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.448 [2024-04-25 18:16:57.123359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.448 [2024-04-25 18:16:57.136375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.448 [2024-04-25 18:16:57.136420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.136431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.148530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.148576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.148587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.161256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.161314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.161327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.173547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.173596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.173622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.185395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.185430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.185443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.197090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.197138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.197150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.208543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.208592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.208605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.218985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.219033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.219045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.229487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.229537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.229565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.241820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.241869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.241896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.252519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.252567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.252579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.263070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.263118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.263130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.274300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.274356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.274368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.284790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.284836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.284848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.293998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.294047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:59.449 [2024-04-25 18:16:57.294058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.305514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.305576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.305589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.314757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.314805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.314816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.325367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.325400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.325412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.336317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.336364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.336375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.349303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.349380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.349394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.362179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.362226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.362237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.449 [2024-04-25 18:16:57.376156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.449 [2024-04-25 18:16:57.376203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:9268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.449 [2024-04-25 18:16:57.376215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.388873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.388919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.388931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.398968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.399018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.399030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.409491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.409540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.409568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.418660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.418706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.418718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.431853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.431899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.431910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.444927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.444974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.444986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.457676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.457722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.457733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.470600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.470647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.470658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.482800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.482846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.482858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.492653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.492701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.492712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.506375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.506421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.506433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.516445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.516491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.516503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.526327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.526372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.526384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.535830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 
00:21:59.709 [2024-04-25 18:16:57.535877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.535888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.545486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.545534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.545560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.555958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.556004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.556016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.567757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.567804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.567815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.578829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.578875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.578887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.590476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.590523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.590534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.599723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.599769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.599781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.612052] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.612100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.612111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.622683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.622731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.622742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.709 [2024-04-25 18:16:57.634704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.709 [2024-04-25 18:16:57.634750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.709 [2024-04-25 18:16:57.634762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.646089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.646135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.646163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.655644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.655692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.655719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.665935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.665982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.665993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.677681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.677728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.677740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:59.969 [2024-04-25 18:16:57.686830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.686877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.686889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.699591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.699639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.699651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.708013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.708059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.708071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.720828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.720876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.720887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.733095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.733142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.733153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.743778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.743824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.743835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.752215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.752265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.752277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.764598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.764646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.764673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.777100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.777148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.777159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.789594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.789642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.789669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.801801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.801846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.801857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.813789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.813835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.813846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.823329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.823376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.823387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.835344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.835390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.835402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.847429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.847476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.847487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.859847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.859893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.859905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.871831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.871877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.871889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.883353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.969 [2024-04-25 18:16:57.883400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.969 [2024-04-25 18:16:57.883411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.969 [2024-04-25 18:16:57.893870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:21:59.970 [2024-04-25 18:16:57.893916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.970 [2024-04-25 18:16:57.893927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.903421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.903467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.903478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.914979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.915026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:00.229 [2024-04-25 18:16:57.915038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.927590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.927638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.927650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.942113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.942163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.942175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.955558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.955608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.955621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.965711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.965757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.965769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.976015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.976062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.976074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.985612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.985689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.985700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:57.997922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:57.997969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11370 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:57.997980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:58.010859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:58.010938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:58.010950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:58.023020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:58.023068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.229 [2024-04-25 18:16:58.023080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.229 [2024-04-25 18:16:58.032947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.229 [2024-04-25 18:16:58.032997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.033009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.048143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.048193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.048205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.056767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.056815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.056827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.070549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.070596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.070607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.082262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.082318] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.082330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.095246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.095303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.095315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.107884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.107930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.107942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.116767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.116814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.116825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.128803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.128851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.128862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.141085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.141131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.141143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.230 [2024-04-25 18:16:58.154319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.230 [2024-04-25 18:16:58.154375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.230 [2024-04-25 18:16:58.154387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.489 [2024-04-25 18:16:58.167568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.489 [2024-04-25 
18:16:58.167614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.489 [2024-04-25 18:16:58.167625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.489 [2024-04-25 18:16:58.180316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.489 [2024-04-25 18:16:58.180362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.489 [2024-04-25 18:16:58.180373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.489 [2024-04-25 18:16:58.193484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.489 [2024-04-25 18:16:58.193532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.489 [2024-04-25 18:16:58.193558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.489 [2024-04-25 18:16:58.201898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.489 [2024-04-25 18:16:58.201945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.489 [2024-04-25 18:16:58.201956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.489 [2024-04-25 18:16:58.214715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.489 [2024-04-25 18:16:58.214762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.489 [2024-04-25 18:16:58.214773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.489 [2024-04-25 18:16:58.226487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.226533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.226545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.239237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.239297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.239310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.251675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.251722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.251734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.264889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.264936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.264947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.277437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.277488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.277501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.287410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.287455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.287466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.300419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.300465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.300476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.311314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.311360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.311371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.320486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.320532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.320544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.329809] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.329856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.329867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.339320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.339366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.339378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.350235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.350282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.350319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.361615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.361693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.361705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 [2024-04-25 18:16:58.371157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e0230) 00:22:00.490 [2024-04-25 18:16:58.371204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.490 [2024-04-25 18:16:58.371215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.490 00:22:00.490 Latency(us) 00:22:00.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.490 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:00.490 nvme0n1 : 2.00 22565.44 88.15 0.00 0.00 5665.90 2129.92 17039.36 00:22:00.490 =================================================================================================================== 00:22:00.490 Total : 22565.44 88.15 0.00 0.00 5665.90 2129.92 17039.36 00:22:00.490 0 00:22:00.490 18:16:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:00.490 18:16:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:00.490 | .driver_specific 00:22:00.490 | .nvme_error 00:22:00.490 | .status_code 00:22:00.490 | .command_transient_transport_error' 00:22:00.490 18:16:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:00.490 18:16:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
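The trace above shows get_transient_errcount piping bdev_get_iostat output through jq to read the NVMe error counters kept by the bdev layer; the check that follows asserts the counter came back non-zero (177 transient transport errors in this run). A minimal stand-alone sketch of the same query, assuming only the socket path and bdev name visible in the trace:

    # Count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
    # The jq filter is the one expanded in the trace, collapsed onto a single line.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'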
00:22:00.749 18:16:58 -- host/digest.sh@71 -- # (( 177 > 0 )) 00:22:00.749 18:16:58 -- host/digest.sh@73 -- # killprocess 84996 00:22:00.749 18:16:58 -- common/autotest_common.sh@926 -- # '[' -z 84996 ']' 00:22:00.749 18:16:58 -- common/autotest_common.sh@930 -- # kill -0 84996 00:22:00.749 18:16:58 -- common/autotest_common.sh@931 -- # uname 00:22:00.749 18:16:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.749 18:16:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84996 00:22:01.007 18:16:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:01.007 18:16:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:01.007 killing process with pid 84996 00:22:01.007 18:16:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84996' 00:22:01.007 18:16:58 -- common/autotest_common.sh@945 -- # kill 84996 00:22:01.007 Received shutdown signal, test time was about 2.000000 seconds 00:22:01.007 00:22:01.007 Latency(us) 00:22:01.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.007 =================================================================================================================== 00:22:01.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.007 18:16:58 -- common/autotest_common.sh@950 -- # wait 84996 00:22:01.007 18:16:58 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:01.007 18:16:58 -- host/digest.sh@54 -- # local rw bs qd 00:22:01.007 18:16:58 -- host/digest.sh@56 -- # rw=randread 00:22:01.008 18:16:58 -- host/digest.sh@56 -- # bs=131072 00:22:01.008 18:16:58 -- host/digest.sh@56 -- # qd=16 00:22:01.008 18:16:58 -- host/digest.sh@58 -- # bperfpid=85083 00:22:01.008 18:16:58 -- host/digest.sh@60 -- # waitforlisten 85083 /var/tmp/bperf.sock 00:22:01.008 18:16:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:01.008 18:16:58 -- common/autotest_common.sh@819 -- # '[' -z 85083 ']' 00:22:01.008 18:16:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.008 18:16:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.008 18:16:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:01.008 18:16:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.008 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:22:01.266 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:01.266 Zero copy mechanism will not be used. 00:22:01.266 [2024-04-25 18:16:58.975723] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
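The bdevperf instance now initializing was launched with the flags traced just above; restated here as a hedged sketch (the flag meanings are the commonly documented bdevperf ones, not something this log states):

    # -m 2: run on core 1 (mask 0x2), -r: RPC socket for the bperf helpers,
    # -w randread -o 131072 -q 16 -t 2: 128 KiB random reads, queue depth 16, 2 s run,
    # -z: start idle and wait for a perform_tests RPC before issuing I/O.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

The "I/O size of 131072 is greater than zero copy threshold (65536)" notice above follows directly from -o 131072.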
00:22:01.266 [2024-04-25 18:16:58.975816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85083 ] 00:22:01.266 [2024-04-25 18:16:59.113099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.525 [2024-04-25 18:16:59.200855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.093 18:16:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:02.093 18:16:59 -- common/autotest_common.sh@852 -- # return 0 00:22:02.093 18:16:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:02.093 18:16:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:02.351 18:17:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:02.351 18:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.351 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:22:02.351 18:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.351 18:17:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:02.351 18:17:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:02.610 nvme0n1 00:22:02.610 18:17:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:02.610 18:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.610 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:22:02.610 18:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.610 18:17:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:02.610 18:17:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:02.610 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:02.610 Zero copy mechanism will not be used. 00:22:02.610 Running I/O for 2 seconds... 
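With the controller attached over TCP using --ddgst (data digest enabled) and crc32c corruption armed through accel_error_inject_error, reads in the 2-second run just started are expected to fail digest verification on receive and complete with COMMAND TRANSIENT TRANSPORT ERROR, which the bdev layer then retries (consistent with --bdev-retry-count -1 and a zero Fail/s column in the summary). A condensed sketch of the RPC sequence traced above, using only the commands visible in the trace; the plain rpc_cmd calls are not expanded in the trace, so their target socket is left as-is:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable        # clear any earlier injection
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32  # arm crc32c corruption (flags as traced)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests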
00:22:02.610 [2024-04-25 18:17:00.513258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.513330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.513345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.610 [2024-04-25 18:17:00.516530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.516567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.516579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.610 [2024-04-25 18:17:00.520613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.520678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.520706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.610 [2024-04-25 18:17:00.524026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.524074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.524086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.610 [2024-04-25 18:17:00.528012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.528060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.528072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.610 [2024-04-25 18:17:00.531654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.531702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.531714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.610 [2024-04-25 18:17:00.535040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.535088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.535100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.610 [2024-04-25 18:17:00.538420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.610 [2024-04-25 18:17:00.538467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.610 [2024-04-25 18:17:00.538479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.611 [2024-04-25 18:17:00.542320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.611 [2024-04-25 18:17:00.542381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.611 [2024-04-25 18:17:00.542394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.546818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.546852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.546863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.550470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.550518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.550530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.554484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.554533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.554544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.558493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.558541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.558553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.562407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.562454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.562466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.565794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.565841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.565852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.569508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.569572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.569583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.572595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.572642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.572653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.576116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.576162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.576173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.579330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.579377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.579389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.582636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.582683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.582695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.586397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.586444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:02.872 [2024-04-25 18:17:00.586456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.589682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.589730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.589742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.592849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.592896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.592908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.596083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.596132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.596143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.599574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.599623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.599635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.603210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.603257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.603269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.606244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.606300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.606313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.609769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.609817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.609828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.613165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.613237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.613266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.616097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.616143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.616154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.619203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.619251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.619263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.623030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.623077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.623089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.626377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.626423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.626435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.629569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.872 [2024-04-25 18:17:00.629616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.872 [2024-04-25 18:17:00.629627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.872 [2024-04-25 18:17:00.633252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.633311] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.633324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.636635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.636683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.636711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.640221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.640269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.640296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.643639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.643686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.643698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.647228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.647278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.647301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.651267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.651339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.651352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.654536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.654582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.654594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.658253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.658311] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.658323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.661934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.661981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.661992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.665907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.665954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.665966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.669423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.669472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.669484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.673090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.673138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.673149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.676083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.676131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.676143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.679130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.679177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.679189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.682974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.683022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.683034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.686663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.686710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.686721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.690223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.690269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.690281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.693106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.693152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.693163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.696555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.696603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.696615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.699945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.699993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.700004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.702992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.703039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.703051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.706643] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.706690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.706701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.709621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.709668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.709695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.713253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.713314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.713327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.715976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.716023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.716035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.719879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.719926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.873 [2024-04-25 18:17:00.719938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.873 [2024-04-25 18:17:00.723680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.873 [2024-04-25 18:17:00.723727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.723738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.727249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.727307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.727319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:02.874 [2024-04-25 18:17:00.731105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.731152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.731164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.734393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.734438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.734450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.737546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.737594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.737605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.741181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.741253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.741266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.743787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.743833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.743844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.747666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.747713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.747725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.751539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.751587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.751599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.754637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.754686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.754697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.758436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.758482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.758494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.761810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.761856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.764869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.764916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.764927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.768105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.768152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.768164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.772235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.772282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.772320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.776076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.776122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.776134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.779069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.779117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.779128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.782638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.782686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.782697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.785823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.785870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.785881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.789471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.789519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.789545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.792639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.792687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.792714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.796344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.796389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.796401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.874 [2024-04-25 18:17:00.799957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:02.874 [2024-04-25 18:17:00.800006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.874 [2024-04-25 18:17:00.800017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.135 [2024-04-25 18:17:00.804097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.135 [2024-04-25 18:17:00.804146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.135 [2024-04-25 18:17:00.804159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.135 [2024-04-25 18:17:00.807592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.135 [2024-04-25 18:17:00.807639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.135 [2024-04-25 18:17:00.807650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.135 [2024-04-25 18:17:00.811373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.135 [2024-04-25 18:17:00.811433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.135 [2024-04-25 18:17:00.811460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.135 [2024-04-25 18:17:00.814548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.135 [2024-04-25 18:17:00.814596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.135 [2024-04-25 18:17:00.814608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.135 [2024-04-25 18:17:00.817870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.135 [2024-04-25 18:17:00.817917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.135 [2024-04-25 18:17:00.817929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.135 [2024-04-25 18:17:00.821357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.135 [2024-04-25 18:17:00.821390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.135 [2024-04-25 18:17:00.821402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.135 [2024-04-25 18:17:00.824816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.135 [2024-04-25 18:17:00.824862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:03.135 [2024-04-25 18:17:00.824873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.828511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.828558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.828570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.831762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.831807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.831819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.835356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.835402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.835413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.838973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.839019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.839030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.842727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.842773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.842784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.846204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.846250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.846262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.849582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.849643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.849670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.853230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.853291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.853305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.856211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.856261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.856274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.860798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.860847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.860859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.864080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.864129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.864141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.867942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.867988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.868000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.871518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.871567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.871580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.875417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.875466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.875478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.879092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.879140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.879152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.882201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.882248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.882260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.885854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.885900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.885912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.889262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.889319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.889333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.892843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.892890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.892901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.897003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.897052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.897064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.901391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 
[2024-04-25 18:17:00.901443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.901456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.905598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.905647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.905674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.909032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.909081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.909093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.912616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.912667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.912694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.916797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.916846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.916858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.920269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.920343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.920355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.136 [2024-04-25 18:17:00.923891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.136 [2024-04-25 18:17:00.923938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.136 [2024-04-25 18:17:00.923950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.927328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.927374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.927386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.930750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.930799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.930810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.934405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.934454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.934467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.937623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.937702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.937715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.941073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.941120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.944344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.944389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.944401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.947880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.947928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.947940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.951922] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.951972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.951984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.956146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.956194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.956205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.959907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.959955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.959966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.964038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.964086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.964098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.967382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.967429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.967441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.971364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.971410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.971422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.975600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.975648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.975660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:03.137 [2024-04-25 18:17:00.978896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.978944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.978956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.982405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.982453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.982465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.985831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.985880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.985892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.989822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.989870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.989882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.993974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.994023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.994049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:00.997561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:00.997611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:00.997653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:01.001115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:01.001163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:01.001175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:01.005246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:01.005294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:01.005308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:01.009047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:01.009094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:01.009105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:01.013065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:01.013113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:01.013125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:01.016680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:01.016727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:01.016739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:01.020458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:01.020509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.137 [2024-04-25 18:17:01.020523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.137 [2024-04-25 18:17:01.024546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.137 [2024-04-25 18:17:01.024596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.024609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.029190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.029281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.029305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.033321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.033370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.033383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.036259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.036333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.036362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.041336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.041387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.041400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.044870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.044917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.044928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.047768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.047817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.047828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.051221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.051270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.051281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.055191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.055241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.055252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.059221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.059270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.059282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.138 [2024-04-25 18:17:01.062882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.138 [2024-04-25 18:17:01.062933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.138 [2024-04-25 18:17:01.062960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.399 [2024-04-25 18:17:01.067147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.399 [2024-04-25 18:17:01.067210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.399 [2024-04-25 18:17:01.067222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.399 [2024-04-25 18:17:01.071046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.399 [2024-04-25 18:17:01.071093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.399 [2024-04-25 18:17:01.071104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.399 [2024-04-25 18:17:01.074870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.399 [2024-04-25 18:17:01.074935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.399 [2024-04-25 18:17:01.074948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.399 [2024-04-25 18:17:01.078858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.399 [2024-04-25 18:17:01.078905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.399 [2024-04-25 18:17:01.078916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.083024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.083071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:03.400 [2024-04-25 18:17:01.083082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.086876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.086923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.086934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.090354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.090400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.090412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.093913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.093961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.093989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.098102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.098149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.098161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.101632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.101696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.101708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.105178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.105233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.105248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.108510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.108558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.108570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.111831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.111878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.111889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.115279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.115336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.115348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.118783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.118830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.118841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.122053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.122099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.122110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.125572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.125622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.125633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.128815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.128861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.128872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.132072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.132119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.132130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.135273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.135329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.135342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.138377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.138425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.138436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.141249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.141307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.141319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.144900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.144947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.144958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.148695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.148742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.148754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.151685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.151731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.151742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.155333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.155381] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.155392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.159019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.159066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.159077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.162310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.162366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.162377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.166415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.166473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.166486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.170087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.170135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.400 [2024-04-25 18:17:01.170146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.400 [2024-04-25 18:17:01.173834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.400 [2024-04-25 18:17:01.173881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.173893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.177512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.177590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.177602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.180800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.180845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.180856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.184348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.184394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.184405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.187199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.187246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.187258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.190665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.190728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.190740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.193810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.193856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.193867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.197475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.197524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.197551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.200943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.200991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.201002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.204730] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.204778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.204789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.208415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.208462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.208473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.211581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.211628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.211640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.215335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.215382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.215393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.218867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.218914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.218925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.222076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.222122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.222133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.225426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.225474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.225486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:03.401 [2024-04-25 18:17:01.229248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.229309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.229322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.232132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.232179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.232190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.236241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.236317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.236331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.239737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.239784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.239795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.243782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.243830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.243841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.247257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.247317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.247328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.250657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.250719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.250731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.253894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.253942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.253953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.257188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.257262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.257287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.260570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.260618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.260629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.264104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.401 [2024-04-25 18:17:01.264152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.401 [2024-04-25 18:17:01.264163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.401 [2024-04-25 18:17:01.267826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.267874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.267885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.271129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.271175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.271187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.274726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.274773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.274784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.278093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.278139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.278151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.281695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.281742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.281754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.284907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.284953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.284964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.288025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.288070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.288082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.291723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.291769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.291780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.295226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.295273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.295284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.298873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.298920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.298931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.302471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.302518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.302529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.305796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.305842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.305854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.308827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.308874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.308885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.311683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.311731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.311742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.315168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.315216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.315227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.318964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.319011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.319022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.322185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.322232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 
[2024-04-25 18:17:01.322243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.402 [2024-04-25 18:17:01.325914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.402 [2024-04-25 18:17:01.325962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.402 [2024-04-25 18:17:01.325973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.330371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.330447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.330459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.334440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.334486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.334497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.337468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.337505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.337533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.341451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.341485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.341512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.344400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.344447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.344458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.347872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.347919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.347931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.351512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.351559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.351570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.355219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.355267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.355278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.358400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.358445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.358457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.361689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.361737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.361748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.365282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.365340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.365352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.368508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.368555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.368566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.371510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.371557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.371568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.374974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.375021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.375032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.378422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.378468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.378480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.381724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.381770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.381781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.664 [2024-04-25 18:17:01.385274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.664 [2024-04-25 18:17:01.385332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.664 [2024-04-25 18:17:01.385344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.388780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.388826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.388837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.392505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.392551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.392562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.395437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.395483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.395494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.399310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.399356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.399367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.402781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.402827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.402840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.405188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.405277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.405300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.409164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.409233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.409261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.412045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.412092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.412102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.415667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.415715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.415726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.419248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 
[2024-04-25 18:17:01.419307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.419319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.423178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.423227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.423239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.427084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.427134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.427147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.430681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.430728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.430739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.433924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.433969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.433981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.437260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.437305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.437318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.440621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.440667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.440678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.443574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.443609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.443638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.446735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.446769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.446797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.450635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.450669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.450698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.454023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.454058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.454086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.457393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.457431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.457445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.460753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.460790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.460818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.464259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.464303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.464331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.467816] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.467852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.467881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.471594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.471628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.471656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.474563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.474597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.665 [2024-04-25 18:17:01.474625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.665 [2024-04-25 18:17:01.478402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.665 [2024-04-25 18:17:01.478435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.478463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.481798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.481831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.481859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.485328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.485363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.485376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.487792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.487826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.487854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:03.666 [2024-04-25 18:17:01.491425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.491459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.491488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.494695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.494729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.494757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.498224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.498259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.498298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.501439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.501476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.501505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.504900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.504935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.504963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.508419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.508453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.508482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.511978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.512013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.512041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.515628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.515666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.515695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.518965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.519000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.519028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.522220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.522255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.522283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.525869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.525905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.525933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.529468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.529505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.529549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.532885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.532919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.532946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.536889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.536924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.536952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.540350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.540384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.540413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.543886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.543921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.543949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.547623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.547657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.547686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.551349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.551383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.551411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.554843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.554879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.554907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.558595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.558632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.558661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.562665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.562700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 
[2024-04-25 18:17:01.562729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.565944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.565979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.566008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.569715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.666 [2024-04-25 18:17:01.569750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.666 [2024-04-25 18:17:01.569778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.666 [2024-04-25 18:17:01.573457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.667 [2024-04-25 18:17:01.573494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.667 [2024-04-25 18:17:01.573538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.667 [2024-04-25 18:17:01.576766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.667 [2024-04-25 18:17:01.576800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.667 [2024-04-25 18:17:01.576829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.667 [2024-04-25 18:17:01.580540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.667 [2024-04-25 18:17:01.580575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.667 [2024-04-25 18:17:01.580603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.667 [2024-04-25 18:17:01.583560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.667 [2024-04-25 18:17:01.583595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.667 [2024-04-25 18:17:01.583623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.667 [2024-04-25 18:17:01.586957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.667 [2024-04-25 18:17:01.586991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.667 [2024-04-25 18:17:01.587019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.667 [2024-04-25 18:17:01.591144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.667 [2024-04-25 18:17:01.591181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.667 [2024-04-25 18:17:01.591210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.929 [2024-04-25 18:17:01.594544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.929 [2024-04-25 18:17:01.594580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.929 [2024-04-25 18:17:01.594609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.929 [2024-04-25 18:17:01.598447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.929 [2024-04-25 18:17:01.598482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.929 [2024-04-25 18:17:01.598511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.929 [2024-04-25 18:17:01.601700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.929 [2024-04-25 18:17:01.601735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.929 [2024-04-25 18:17:01.601763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.929 [2024-04-25 18:17:01.605500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.929 [2024-04-25 18:17:01.605569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.929 [2024-04-25 18:17:01.605598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.929 [2024-04-25 18:17:01.609026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.929 [2024-04-25 18:17:01.609061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.929 [2024-04-25 18:17:01.609090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.929 [2024-04-25 18:17:01.612633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.612669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.612713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.616143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.616178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.616206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.619728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.619762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.619790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.623146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.623180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.623208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.626663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.626714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.626742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.629480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.629516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.629560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.632605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.632640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.632669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.636061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.636096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.636124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.639420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.639454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.639483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.643050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.643084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.643113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.646723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.646759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.646787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.650254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.650328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.650342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.653417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.653453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.653482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.657089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.657124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.657153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.660731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 
00:22:03.930 [2024-04-25 18:17:01.660769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.660798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.664721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.664757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.664785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.669498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.669581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.669610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.673845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.673880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.673908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.677236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.677283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.677299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.680330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.680364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.680392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.684247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.684307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.684321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.688125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.688163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.688192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.691790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.691824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.691853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.695553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.695588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.695617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.699006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.699041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.699069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.702788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.930 [2024-04-25 18:17:01.702823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.930 [2024-04-25 18:17:01.702852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.930 [2024-04-25 18:17:01.705980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.706015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.706044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.710057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.710092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.710121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.713625] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.713661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.713706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.716969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.717004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.717033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.720790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.721000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.721120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.724527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.724563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.724592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.727884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.727920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.727949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.731487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.731692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.731845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.735582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.735782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.735930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
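Each of the repeated records above and below follows the same pattern: the NVMe/TCP host receives a data PDU for a READ on qid:1 of tqpair 0x1992a30, the CRC32C data digest recomputed over the received payload does not match the digest carried in the PDU, the transport logs "data digest error", and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, so it remains retryable. The following is a minimal, self-contained sketch of that digest check only; it is not SPDK source, and the helper names and the bitwise CRC32C implementation are assumptions made for illustration.

/*
 * Illustrative sketch only, not SPDK source: the NVMe/TCP data digest (DDGST)
 * is a CRC32C computed over the payload of a data PDU. The receiver recomputes
 * it and compares it with the digest carried in the PDU; a mismatch is the
 * "data digest error" reported in the log records around this point.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Table-less, bit-by-bit CRC32C (Castagnoli), reflected form, poly 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: returns 0 when the payload matches the received digest,
 * -1 on a data digest error. */
static int verify_data_digest(const uint8_t *payload, size_t len, uint32_t received_ddgst)
{
    return crc32c(payload, len) == received_ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    /* Digest the sender would have appended to the data PDU. */
    uint32_t ddgst = crc32c(payload, sizeof(payload));
    printf("intact payload:    %s\n",
           verify_data_digest(payload, sizeof(payload), ddgst) ? "data digest error" : "digest ok");

    /* Flip one bit to mimic corruption on the wire. */
    payload[100] ^= 0x01;
    printf("corrupted payload: %s\n",
           verify_data_digest(payload, sizeof(payload), ddgst) ? "data digest error" : "digest ok");

    return 0;
}

Note that in the log the mismatch is surfaced per command rather than per connection: the same tqpair (0x1992a30) keeps appearing in later records, so each digest failure fails only the individual READ, with dnr:0 leaving it eligible for retry.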
00:22:03.931 [2024-04-25 18:17:01.739460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.739495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.739524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.743140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.743175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.743204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.746464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.746500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.746528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.750147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.750182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.750210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.753959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.753994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.754023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.757248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.757310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.757326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.761441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.761479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.761525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.765017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.765051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.765079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.768886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.768922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.768951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.772197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.772231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.772260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.775818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.775853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.775882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.779529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.779566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.779596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.782612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.782646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.782675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.786074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.786109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.786137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.789951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.789987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.790015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.793929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.793964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.793993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.797876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.797912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.931 [2024-04-25 18:17:01.797940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.931 [2024-04-25 18:17:01.802182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.931 [2024-04-25 18:17:01.802218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.802246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.805953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.805987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.806032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.809837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.809872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.809900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.813856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.813891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 
18:17:01.813919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.817968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.818003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.818032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.821996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.822031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.822060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.825376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.825414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.825427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.828310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.828343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.828371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.831207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.831243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.831271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.834555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.834589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.834618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.837821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.837855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.837883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.840964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.841014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.841042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.844928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.845134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.845292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.848449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.848485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.848513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.851927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.851961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.851990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.855405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.855441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.855486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:03.932 [2024-04-25 18:17:01.859187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:03.932 [2024-04-25 18:17:01.859226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.932 [2024-04-25 18:17:01.859255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.211 [2024-04-25 18:17:01.863090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.211 [2024-04-25 18:17:01.863132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.211 [2024-04-25 18:17:01.863162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.211 [2024-04-25 18:17:01.867028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.211 [2024-04-25 18:17:01.867068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.211 [2024-04-25 18:17:01.867082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.211 [2024-04-25 18:17:01.871187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.211 [2024-04-25 18:17:01.871228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.211 [2024-04-25 18:17:01.871241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.211 [2024-04-25 18:17:01.874949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.211 [2024-04-25 18:17:01.874990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.211 [2024-04-25 18:17:01.875004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.211 [2024-04-25 18:17:01.878587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.211 [2024-04-25 18:17:01.878625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.211 [2024-04-25 18:17:01.878656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.211 [2024-04-25 18:17:01.882634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.211 [2024-04-25 18:17:01.882687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.211 [2024-04-25 18:17:01.882717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.211 [2024-04-25 18:17:01.886806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.211 [2024-04-25 18:17:01.886844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.886874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.889951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.889987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.890017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.893589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.893625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.893654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.897259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.897307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.897321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.900547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.900580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.900609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.903782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.903816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.903845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.907726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.907762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.907792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.911122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.911157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.911185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.914841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 
[2024-04-25 18:17:01.914876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.914905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.917963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.917999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.918028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.921625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.921693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.921722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.925035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.925069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.925098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.928101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.928135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.928164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.931179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.931215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.931244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.934927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.934962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.934992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.937815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.937849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.937879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.942240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.942302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.942333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.946124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.946164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.946195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.950378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.950415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.950445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.954857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.954896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.954926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.958918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.958955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.958984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.962838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.962875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.962904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.966623] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.966657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.966686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.969944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.969979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.970008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.972765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.972802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.972832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.976235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.976294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.976332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.212 [2024-04-25 18:17:01.979524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.212 [2024-04-25 18:17:01.979557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.212 [2024-04-25 18:17:01.979586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:01.983683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:01.983718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:01.983747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:01.987485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:01.987521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:01.987550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:01.990965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:01.991001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:01.991030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:01.994564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:01.994598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:01.994627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:01.998139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:01.998175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:01.998203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.001797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.001832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.001862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.005097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.005133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.005161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.008566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.008603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.008633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.011926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.011964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.011994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.015831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.015867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.015897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.019524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.019562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.019593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.023278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.023373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.023388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.027107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.027143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.027171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.030972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.031007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.031036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.034613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.034680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.034709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.038192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.038225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.038253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.041700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.041733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.041761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.045265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.045329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.045343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.048435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.048472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.048500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.052032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.052067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.052096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.055787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.055820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.055849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.059376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.059409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.059437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.062747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.062782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:04.213 [2024-04-25 18:17:02.062811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.066627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.066679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.066708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.071097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.071133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.071162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.075204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.075241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.213 [2024-04-25 18:17:02.075269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.213 [2024-04-25 18:17:02.079400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.213 [2024-04-25 18:17:02.079437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.079467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.082053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.082088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.082117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.086156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.086192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.086222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.089796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.089831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.089860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.093929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.093966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.093995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.097434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.097665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.097698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.101822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.102031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.102306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.105798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.105971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.106004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.110004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.110043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.110073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.113881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.113932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.113961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.116609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.116659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.116688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.120493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.120546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.120574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.123877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.123929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.123959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.127540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.127593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.127622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.130861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.130913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.130942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.134218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.134297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.134312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.138106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.138157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.138185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.214 [2024-04-25 18:17:02.142331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.214 [2024-04-25 18:17:02.142394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.214 [2024-04-25 18:17:02.142424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.146191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.146229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.146258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.150378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.150427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.150457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.154471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.154509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.154537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.157699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.157750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.157779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.161053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.161105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.161134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.164810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.164848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.164878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.168329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.168365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.168393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.172088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.172140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.172169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.176079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.176131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.176159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.179045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.179096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.179125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.182723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.182776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.182805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.186912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.186963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.186991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.190957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.190995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.191024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.193966] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.194028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.194056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.197606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.197674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.197703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.201632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.201699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.201727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.205357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.205395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.205424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.208651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.208688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.208716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.211956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.212007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.212036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.215902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.215954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.215982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.219889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.219940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.219969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.224037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.224089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.224117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.229124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.229162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.229192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.233204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.233260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.233287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.236594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.236672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.240249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.240313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.240342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.243979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.244030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.244060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.247733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.247783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.247812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.251193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.251244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.251272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.254792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.254844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.254873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.258634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.258670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.258698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.262181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.262233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.262262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.265439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.265477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.265507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.268960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.269013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.269042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.272466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.272504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.272534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.276390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.276433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.276462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.279701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.279752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.279781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.283236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.283328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.283342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.287068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.475 [2024-04-25 18:17:02.287119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.475 [2024-04-25 18:17:02.287147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.475 [2024-04-25 18:17:02.290693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.290743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.290771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.294129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.294179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 
[2024-04-25 18:17:02.294207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.297831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.297883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.297910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.300982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.301032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.301059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.304416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.304466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.304494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.307990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.308041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.308069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.311186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.311238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.311266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.315266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.315343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.315373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.318922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.318973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.319001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.322941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.322992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.323020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.326589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.326626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.326654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.330545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.330598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.330627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.334067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.334117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.334145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.337662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.337729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.337757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.340490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.340540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.340569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.343416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.343465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.343492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.346971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.347020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.347048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.350663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.350712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.350740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.353840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.353888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.353916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.357038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.357088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.357115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.360766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.360816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.360844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.364113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.364163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.364192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.367461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.367510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.367539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.371379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.371428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.371456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.374750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.374799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.374827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.378399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.378450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.378478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.382179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.382230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.382258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.385459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.385497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.385541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.388754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.388804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.388833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.392398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 
00:22:04.476 [2024-04-25 18:17:02.392450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.392478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.396058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.396110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.396139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.399428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.399479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.399507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.476 [2024-04-25 18:17:02.403321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.476 [2024-04-25 18:17:02.403367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.476 [2024-04-25 18:17:02.403398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.406935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.406971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.406999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.410677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.410713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.410741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.414598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.414649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.414677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.418212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.418263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.418303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.421380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.421419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.421448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.424903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.424953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.424982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.428191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.428242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.428270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.430972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.431022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.431050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.434364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.434414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.434442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.437897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.437946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.437975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.441032] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.441083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.441111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.444760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.444811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.444839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.448572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.448623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.448652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.452276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.452337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.452366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.455205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.455255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.455283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.458153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.458203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.458231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.461858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.461908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.461937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:04.736 [2024-04-25 18:17:02.465297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.465358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.465387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.469124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.469174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.469226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.473354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.473406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.736 [2024-04-25 18:17:02.473435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.736 [2024-04-25 18:17:02.476982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.736 [2024-04-25 18:17:02.477032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.737 [2024-04-25 18:17:02.477060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.737 [2024-04-25 18:17:02.480830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.737 [2024-04-25 18:17:02.480880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.737 [2024-04-25 18:17:02.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.737 [2024-04-25 18:17:02.484974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.737 [2024-04-25 18:17:02.485041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.737 [2024-04-25 18:17:02.485069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.737 [2024-04-25 18:17:02.489128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30) 00:22:04.737 [2024-04-25 18:17:02.489181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.737 [2024-04-25 18:17:02.489201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:04.737 [2024-04-25 18:17:02.493368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30)
00:22:04.737 [2024-04-25 18:17:02.493408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:04.737 [2024-04-25 18:17:02.493421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:04.737 [2024-04-25 18:17:02.496783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30)
00:22:04.737 [2024-04-25 18:17:02.496834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:04.737 [2024-04-25 18:17:02.496878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:04.737 [2024-04-25 18:17:02.500534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30)
00:22:04.737 [2024-04-25 18:17:02.500569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:04.737 [2024-04-25 18:17:02.500597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:04.737 [2024-04-25 18:17:02.504196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30)
00:22:04.737 [2024-04-25 18:17:02.504248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:04.737 [2024-04-25 18:17:02.504277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:04.737 [2024-04-25 18:17:02.507825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1992a30)
00:22:04.737 [2024-04-25 18:17:02.507876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:04.737 [2024-04-25 18:17:02.507904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:04.737
00:22:04.737 Latency(us)
00:22:04.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.737 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:04.737 nvme0n1 : 2.00 8634.14 1079.27 0.00 0.00 1849.88 618.12 5004.57
00:22:04.737 ===================================================================================================================
00:22:04.737 Total : 8634.14 1079.27 0.00 0.00 1849.88 618.12 5004.57
00:22:04.737 0
00:22:04.737 18:17:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:04.737 18:17:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:04.737 18:17:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:04.737 | .driver_specific
00:22:04.737 | .nvme_error
00:22:04.737 | .status_code
00:22:04.737 | .command_transient_transport_error'
00:22:04.737 18:17:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:04.996 18:17:02 -- host/digest.sh@71 -- # (( 557 > 0 ))
00:22:04.996 18:17:02 -- host/digest.sh@73 -- # killprocess 85083
00:22:04.996 18:17:02 -- common/autotest_common.sh@926 -- # '[' -z 85083 ']'
00:22:04.996 18:17:02 -- common/autotest_common.sh@930 -- # kill -0 85083
00:22:04.996 18:17:02 -- common/autotest_common.sh@931 -- # uname
00:22:04.996 18:17:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:04.996 18:17:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85083
00:22:04.996 18:17:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:04.996 killing process with pid 85083
00:22:04.996 18:17:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:04.996 18:17:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85083'
00:22:04.996 Received shutdown signal, test time was about 2.000000 seconds
00:22:04.996
00:22:04.996 Latency(us)
00:22:04.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.996 ===================================================================================================================
00:22:04.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:04.996 18:17:02 -- common/autotest_common.sh@945 -- # kill 85083
00:22:04.996 18:17:02 -- common/autotest_common.sh@950 -- # wait 85083
00:22:05.255 18:17:02 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:05.255 18:17:02 -- host/digest.sh@54 -- # local rw bs qd
00:22:05.255 18:17:02 -- host/digest.sh@56 -- # rw=randwrite
00:22:05.255 18:17:02 -- host/digest.sh@56 -- # bs=4096
00:22:05.255 18:17:02 -- host/digest.sh@56 -- # qd=128
00:22:05.255 18:17:02 -- host/digest.sh@58 -- # bperfpid=85168
00:22:05.255 18:17:02 -- host/digest.sh@60 -- # waitforlisten 85168 /var/tmp/bperf.sock
00:22:05.255 18:17:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:05.255 18:17:02 -- common/autotest_common.sh@819 -- # '[' -z 85168 ']'
00:22:05.255 18:17:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:05.255 18:17:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:05.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:05.255 18:17:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:05.255 18:17:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:05.255 18:17:02 -- common/autotest_common.sh@10 -- # set +x
00:22:05.255 [2024-04-25 18:17:03.041097] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
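For readability, here is a minimal Bash sketch of the error-count check traced a few lines above (host/digest.sh@27, @28 and @71): the test queries bdevperf's RPC socket for the bdev's I/O statistics and extracts the transient-transport-error counter with jq. The rpc.py path, socket path and jq filter are copied from the trace; the wrapper function itself is illustrative, not the literal digest.sh source.

```bash
#!/usr/bin/env bash
# Sketch of the transient-error check performed by the digest test above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK RPC client, path as shown in the log
bperf_sock=/var/tmp/bperf.sock                       # bdevperf RPC socket, as shown in the log

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat returns JSON; pull out the per-bdev NVMe error counter.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The randread leg passes only if digest errors were actually counted
# (557 of them in the run traced above).
errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))
```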
00:22:05.255 [2024-04-25 18:17:03.041756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85168 ]
00:22:05.255 [2024-04-25 18:17:03.178056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:05.514 [2024-04-25 18:17:03.262581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:06.080 18:17:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:06.080 18:17:03 -- common/autotest_common.sh@852 -- # return 0
00:22:06.080 18:17:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:06.080 18:17:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:06.339 18:17:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:06.339 18:17:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:06.339 18:17:04 -- common/autotest_common.sh@10 -- # set +x
00:22:06.339 18:17:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:06.339 18:17:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:06.339 18:17:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:06.597 nvme0n1
00:22:06.597 18:17:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:06.597 18:17:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:06.597 18:17:04 -- common/autotest_common.sh@10 -- # set +x
00:22:06.597 18:17:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:06.597 18:17:04 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:06.597 18:17:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:06.856 Running I/O for 2 seconds...
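The trace above sets up the write-side digest test: bdevperf is started with a 4 KiB random-write workload, NVMe error statistics are enabled, CRC-32C error injection is disabled while the controller is attached with TCP data digest (--ddgst), injection is then switched to corrupt mode, and perform_tests kicks off the I/O. A condensed Bash sketch of that sequence follows. The commands and arguments are copied from the trace; the wait loop and the use of rpc.py's default socket for the accel_error_inject_error calls (the log's rpc_cmd helper) are assumptions made for the sketch.

```bash
#!/usr/bin/env bash
# Condensed replay of the randwrite digest-error setup traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
bperf_sock=/var/tmp/bperf.sock

# Start bdevperf on core 1 (-m 2) with a 2-second, QD=128, 4 KiB randwrite job;
# -z makes it wait for configuration over the RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
while [ ! -S "$bperf_sock" ]; do sleep 0.1; done   # stand-in for the waitforlisten helper

# Keep per-command NVMe error statistics and retry failed I/O indefinitely.
"$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure CRC-32C error injection is off while the controller is attached
# with TCP data digest enabled (--ddgst).
"$rpc_py" accel_error_inject_error -o crc32c -t disable
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Enable CRC-32C corruption injection (arguments exactly as traced), then run the job.
"$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$bperf_py" -s "$bperf_sock" perform_tests
```

The corrupted digests show up on the host side as the "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions that fill the two-second run below.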
00:22:06.856 [2024-04-25 18:17:04.604556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eea00 00:22:06.856 [2024-04-25 18:17:04.605906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.856 [2024-04-25 18:17:04.605948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.856 [2024-04-25 18:17:04.616944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fb8b8 00:22:06.856 [2024-04-25 18:17:04.617969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.856 [2024-04-25 18:17:04.618018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.856 [2024-04-25 18:17:04.627395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6b70 00:22:06.856 [2024-04-25 18:17:04.628934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.856 [2024-04-25 18:17:04.628981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.856 [2024-04-25 18:17:04.638430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e0ea0 00:22:06.856 [2024-04-25 18:17:04.639493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.856 [2024-04-25 18:17:04.639539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:06.856 [2024-04-25 18:17:04.646110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f35f0 00:22:06.856 [2024-04-25 18:17:04.646214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.856 [2024-04-25 18:17:04.646234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:06.856 [2024-04-25 18:17:04.657134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e0630 00:22:06.857 [2024-04-25 18:17:04.657454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.657508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.666769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190dece0 00:22:06.857 [2024-04-25 18:17:04.667006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.667026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.676345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e12d8 00:22:06.857 [2024-04-25 18:17:04.676658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.676693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.686237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2948 00:22:06.857 [2024-04-25 18:17:04.687305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.687361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.695832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed920 00:22:06.857 [2024-04-25 18:17:04.696072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.696097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.705135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2d80 00:22:06.857 [2024-04-25 18:17:04.705413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.705433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.714603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed920 00:22:06.857 [2024-04-25 18:17:04.714807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.714825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.723906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190de8a8 00:22:06.857 [2024-04-25 18:17:04.724101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.724120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.733328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e4140 00:22:06.857 [2024-04-25 18:17:04.733755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.733781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.743438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e3060 00:22:06.857 [2024-04-25 18:17:04.744704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.744749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.753081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6458 00:22:06.857 [2024-04-25 18:17:04.753671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.753698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.762847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f4298 00:22:06.857 [2024-04-25 18:17:04.763514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.763545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.772376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f20d8 00:22:06.857 [2024-04-25 18:17:04.773365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.773396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:06.857 [2024-04-25 18:17:04.781310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190de038 00:22:06.857 [2024-04-25 18:17:04.782073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.857 [2024-04-25 18:17:04.782106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.793652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f0350 00:22:07.117 [2024-04-25 18:17:04.794485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.794545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.803450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fcdd0 00:22:07.117 [2024-04-25 18:17:04.804331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.804384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.812875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e1710 00:22:07.117 [2024-04-25 18:17:04.814022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.814067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.823737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed920 00:22:07.117 [2024-04-25 18:17:04.824801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.824844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.830890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2948 00:22:07.117 [2024-04-25 18:17:04.831043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.831061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.842489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f92c0 00:22:07.117 [2024-04-25 18:17:04.843260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.843328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.852092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e49b0 00:22:07.117 [2024-04-25 18:17:04.852992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.853037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.860702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e7c50 00:22:07.117 [2024-04-25 18:17:04.861138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.861177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.873724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6738 00:22:07.117 [2024-04-25 18:17:04.874840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.874883] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.880784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190de8a8 00:22:07.117 [2024-04-25 18:17:04.881000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.881023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.891616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6fa8 00:22:07.117 [2024-04-25 18:17:04.892322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.892366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.900491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ee5c8 00:22:07.117 [2024-04-25 18:17:04.901703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.901747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.910089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f9f68 00:22:07.117 [2024-04-25 18:17:04.910406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.910430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.919546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f1430 00:22:07.117 [2024-04-25 18:17:04.920058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.920090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.928818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f31b8 00:22:07.117 [2024-04-25 18:17:04.929222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.929245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.938105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6458 00:22:07.117 [2024-04-25 18:17:04.938492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.117 [2024-04-25 18:17:04.938516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:07.117 [2024-04-25 18:17:04.947426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e3060 00:22:07.118 [2024-04-25 18:17:04.947778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:04.947801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:04.956777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190efae0 00:22:07.118 [2024-04-25 18:17:04.957083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:04.957107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:04.966292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fef90 00:22:07.118 [2024-04-25 18:17:04.966597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:04.966621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:04.975703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190efae0 00:22:07.118 [2024-04-25 18:17:04.975956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:04.975980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:04.984990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e3060 00:22:07.118 [2024-04-25 18:17:04.985298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:04.985322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:04.994342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fd640 00:22:07.118 [2024-04-25 18:17:04.994960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:04.994992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:05.003639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ef6a8 00:22:07.118 [2024-04-25 18:17:05.004295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 
18:17:05.004339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:05.014364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ea680 00:22:07.118 [2024-04-25 18:17:05.015073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:05.015102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:05.022797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f5378 00:22:07.118 [2024-04-25 18:17:05.023625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:05.023670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:05.032149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e49b0 00:22:07.118 [2024-04-25 18:17:05.032945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:05.032975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:07.118 [2024-04-25 18:17:05.042774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e49b0 00:22:07.118 [2024-04-25 18:17:05.043500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.118 [2024-04-25 18:17:05.043529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.052493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f96f8 00:22:07.378 [2024-04-25 18:17:05.053825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.053870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.062973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eaef0 00:22:07.378 [2024-04-25 18:17:05.063483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.063521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.076733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e1f80 00:22:07.378 [2024-04-25 18:17:05.077992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:07.378 [2024-04-25 18:17:05.078036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.084788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fa3a0 00:22:07.378 [2024-04-25 18:17:05.084981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.085000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.096154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e12d8 00:22:07.378 [2024-04-25 18:17:05.096572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.096594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.105907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fac10 00:22:07.378 [2024-04-25 18:17:05.106647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.106678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.114318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f81e0 00:22:07.378 [2024-04-25 18:17:05.114529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.114548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.126420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eb328 00:22:07.378 [2024-04-25 18:17:05.127189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.127219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.136085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ec408 00:22:07.378 [2024-04-25 18:17:05.137839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.137883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.144666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190edd58 00:22:07.378 [2024-04-25 18:17:05.145903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21464 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.145947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.154356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e73e0 00:22:07.378 [2024-04-25 18:17:05.154794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.154819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.165821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eb328 00:22:07.378 [2024-04-25 18:17:05.166925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.166970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.174288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ee190 00:22:07.378 [2024-04-25 18:17:05.175520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.175565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.183668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fc998 00:22:07.378 [2024-04-25 18:17:05.184161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.184191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.193059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e7c50 00:22:07.378 [2024-04-25 18:17:05.193795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.193843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.201455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190df118 00:22:07.378 [2024-04-25 18:17:05.201617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.201652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.213109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fe2e8 00:22:07.378 [2024-04-25 18:17:05.213962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:4600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.213993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:07.378 [2024-04-25 18:17:05.221811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6890 00:22:07.378 [2024-04-25 18:17:05.222872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.378 [2024-04-25 18:17:05.222916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.230741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f92c0 00:22:07.379 [2024-04-25 18:17:05.230819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.230838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.241499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e0ea0 00:22:07.379 [2024-04-25 18:17:05.242655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.242714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.252239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eaab8 00:22:07.379 [2024-04-25 18:17:05.253433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.253461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.259370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fd208 00:22:07.379 [2024-04-25 18:17:05.259587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.259605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.270126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f31b8 00:22:07.379 [2024-04-25 18:17:05.270912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.270943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.279475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed4e8 00:22:07.379 [2024-04-25 18:17:05.280626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:90 nsid:1 lba:5845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.280670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.289076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ddc00 00:22:07.379 [2024-04-25 18:17:05.289554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.289578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.300659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ee5c8 00:22:07.379 [2024-04-25 18:17:05.301815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.379 [2024-04-25 18:17:05.301858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:07.379 [2024-04-25 18:17:05.309645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f0788 00:22:07.638 [2024-04-25 18:17:05.311029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.311074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.319990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fb8b8 00:22:07.638 [2024-04-25 18:17:05.320657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.320688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.329051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190df988 00:22:07.638 [2024-04-25 18:17:05.330189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.330250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.338925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f4298 00:22:07.638 [2024-04-25 18:17:05.339272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.339303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.348498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2948 00:22:07.638 [2024-04-25 18:17:05.348977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.349009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.358263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6cc8 00:22:07.638 [2024-04-25 18:17:05.359419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.359462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.368371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fe2e8 00:22:07.638 [2024-04-25 18:17:05.369451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.369480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.379153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eee38 00:22:07.638 [2024-04-25 18:17:05.380275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.380344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.388656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ec408 00:22:07.638 [2024-04-25 18:17:05.389930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.389975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.398248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e0630 00:22:07.638 [2024-04-25 18:17:05.399369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.399422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.409075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f7538 00:22:07.638 [2024-04-25 18:17:05.410209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.410252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.416268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e99d8 00:22:07.638 [2024-04-25 
18:17:05.416440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.416458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.427079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f4f40 00:22:07.638 [2024-04-25 18:17:05.427718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.427748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:07.638 [2024-04-25 18:17:05.437272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2510 00:22:07.638 [2024-04-25 18:17:05.438085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.638 [2024-04-25 18:17:05.438144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.445954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e1710 00:22:07.639 [2024-04-25 18:17:05.447299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.447371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.455552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f5378 00:22:07.639 [2024-04-25 18:17:05.456189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.456217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.465138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f8e88 00:22:07.639 [2024-04-25 18:17:05.465946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.465977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.474518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2510 00:22:07.639 [2024-04-25 18:17:05.475569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.475611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.485110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e84c0 
00:22:07.639 [2024-04-25 18:17:05.486139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.486183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.492353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ef270 00:22:07.639 [2024-04-25 18:17:05.492439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.492458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.503664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f1ca0 00:22:07.639 [2024-04-25 18:17:05.504236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.504265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.513131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6b70 00:22:07.639 [2024-04-25 18:17:05.514749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.514812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.522551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f5378 00:22:07.639 [2024-04-25 18:17:05.523235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.523265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.530892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e8088 00:22:07.639 [2024-04-25 18:17:05.531021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.531041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.542492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed4e8 00:22:07.639 [2024-04-25 18:17:05.543274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.543340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.550980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) 
with pdu=0x2000190ecc78 00:22:07.639 [2024-04-25 18:17:05.551861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.551906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.560454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f81e0 00:22:07.639 [2024-04-25 18:17:05.561700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.639 [2024-04-25 18:17:05.561745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.639 [2024-04-25 18:17:05.571005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e5658 00:22:07.898 [2024-04-25 18:17:05.571943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.571990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.580684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fac10 00:22:07.898 [2024-04-25 18:17:05.581866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.581910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.590193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e3498 00:22:07.898 [2024-04-25 18:17:05.591122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.591168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.601383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e9168 00:22:07.898 [2024-04-25 18:17:05.602101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.602132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.611045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e3d08 00:22:07.898 [2024-04-25 18:17:05.611700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.611723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.620583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1aa2a10) with pdu=0x2000190fcdd0 00:22:07.898 [2024-04-25 18:17:05.621285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.621338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.630084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f81e0 00:22:07.898 [2024-04-25 18:17:05.631562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.631606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.639778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f4298 00:22:07.898 [2024-04-25 18:17:05.640550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.640580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.650572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190efae0 00:22:07.898 [2024-04-25 18:17:05.651278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.651337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.661870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6458 00:22:07.898 [2024-04-25 18:17:05.663708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.663770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.672343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fbcf0 00:22:07.898 [2024-04-25 18:17:05.673183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.673239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.682441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f1ca0 00:22:07.898 [2024-04-25 18:17:05.683941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.683987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.692893] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f4b08 00:22:07.898 [2024-04-25 18:17:05.693978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.694024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.702675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ff3c8 00:22:07.898 [2024-04-25 18:17:05.704047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.704092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.712868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ea248 00:22:07.898 [2024-04-25 18:17:05.714053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.714098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.724743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eb760 00:22:07.898 [2024-04-25 18:17:05.725964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.726011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.732400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e3d08 00:22:07.898 [2024-04-25 18:17:05.732618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.732638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.744737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e84c0 00:22:07.898 [2024-04-25 18:17:05.745555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.745586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.754671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f7da8 00:22:07.898 [2024-04-25 18:17:05.755431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.755491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.763337] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fac10 00:22:07.898 [2024-04-25 18:17:05.764143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.764203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.774910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e99d8 00:22:07.898 [2024-04-25 18:17:05.775781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.775826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.784951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6738 00:22:07.898 [2024-04-25 18:17:05.785811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.785856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.795124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed0b0 00:22:07.898 [2024-04-25 18:17:05.795948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.795979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.805185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6cc8 00:22:07.898 [2024-04-25 18:17:05.805954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.898 [2024-04-25 18:17:05.805986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:07.898 [2024-04-25 18:17:05.815360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f4f40 00:22:07.898 [2024-04-25 18:17:05.816043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.899 [2024-04-25 18:17:05.816073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:07.899 [2024-04-25 18:17:05.825145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190dfdc0 00:22:07.899 [2024-04-25 18:17:05.825884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.899 [2024-04-25 18:17:05.825917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:08.159 
[2024-04-25 18:17:05.835011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fef90 00:22:08.159 [2024-04-25 18:17:05.836781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.159 [2024-04-25 18:17:05.836825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:08.159 [2024-04-25 18:17:05.845510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190df988 00:22:08.159 [2024-04-25 18:17:05.846255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.159 [2024-04-25 18:17:05.846355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:08.159 [2024-04-25 18:17:05.853707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f7da8 00:22:08.159 [2024-04-25 18:17:05.854781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.159 [2024-04-25 18:17:05.854825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:08.159 [2024-04-25 18:17:05.863310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e1b48 00:22:08.159 [2024-04-25 18:17:05.863471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.159 [2024-04-25 18:17:05.863491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:08.159 [2024-04-25 18:17:05.872642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2d80 00:22:08.159 [2024-04-25 18:17:05.872764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.159 [2024-04-25 18:17:05.872784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:08.159 [2024-04-25 18:17:05.884405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f3a28 00:22:08.159 [2024-04-25 18:17:05.885530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.159 [2024-04-25 18:17:05.885558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:08.159 [2024-04-25 18:17:05.891450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e3060 00:22:08.159 [2024-04-25 18:17:05.891623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.891642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:22:08.160 [2024-04-25 18:17:05.902490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f1ca0 00:22:08.160 [2024-04-25 18:17:05.902883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.902907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.912455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190feb58 00:22:08.160 [2024-04-25 18:17:05.912991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.913022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.923265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ff3c8 00:22:08.160 [2024-04-25 18:17:05.924455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.924500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.932809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190dfdc0 00:22:08.160 [2024-04-25 18:17:05.934523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.934568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.942270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e5220 00:22:08.160 [2024-04-25 18:17:05.943779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.943823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.950507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6fa8 00:22:08.160 [2024-04-25 18:17:05.951592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.951637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.959493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ebb98 00:22:08.160 [2024-04-25 18:17:05.959832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.959856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 
cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.968767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fc560 00:22:08.160 [2024-04-25 18:17:05.968895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.968914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.978786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f8e88 00:22:08.160 [2024-04-25 18:17:05.980017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.980060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:05.988523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fa7d8 00:22:08.160 [2024-04-25 18:17:05.988985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:05.989014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.000118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f4b08 00:22:08.160 [2024-04-25 18:17:06.001264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.001334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.007263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f9b30 00:22:08.160 [2024-04-25 18:17:06.007485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.007504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.018933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190df118 00:22:08.160 [2024-04-25 18:17:06.019828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.019871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.027419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e84c0 00:22:08.160 [2024-04-25 18:17:06.028488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.028536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.037032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fa3a0 00:22:08.160 [2024-04-25 18:17:06.037468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.037494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.048506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f57b0 00:22:08.160 [2024-04-25 18:17:06.049635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.049694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.056920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e5658 00:22:08.160 [2024-04-25 18:17:06.058121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.058165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.066801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190df988 00:22:08.160 [2024-04-25 18:17:06.067860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.067904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.075985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f0788 00:22:08.160 [2024-04-25 18:17:06.077478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.077507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:08.160 [2024-04-25 18:17:06.086327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190dfdc0 00:22:08.160 [2024-04-25 18:17:06.088029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.160 [2024-04-25 18:17:06.088075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.098334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eee38 00:22:08.419 [2024-04-25 18:17:06.099868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.099913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.108971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f7100 00:22:08.419 [2024-04-25 18:17:06.110204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.110250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.120385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2510 00:22:08.419 [2024-04-25 18:17:06.121586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.121615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.127243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190dece0 00:22:08.419 [2024-04-25 18:17:06.128129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.128175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.138871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fda78 00:22:08.419 [2024-04-25 18:17:06.139741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.139786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.147494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e0a68 00:22:08.419 [2024-04-25 18:17:06.148521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.148564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.156798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed4e8 00:22:08.419 [2024-04-25 18:17:06.158431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.158477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.166411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f8a50 00:22:08.419 [2024-04-25 18:17:06.167331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.167387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.176924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e5220 00:22:08.419 [2024-04-25 18:17:06.178144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.178188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.186471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ee190 00:22:08.419 [2024-04-25 18:17:06.187637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.187680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.197083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f8a50 00:22:08.419 [2024-04-25 18:17:06.198245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.198346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.204225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ec408 00:22:08.419 [2024-04-25 18:17:06.204507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.419 [2024-04-25 18:17:06.204532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:08.419 [2024-04-25 18:17:06.216705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e9e10 00:22:08.420 [2024-04-25 18:17:06.217660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.217705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.225142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f2510 00:22:08.420 [2024-04-25 18:17:06.226190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.226235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.234697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190efae0 00:22:08.420 [2024-04-25 18:17:06.235616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.235661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.245418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6300 00:22:08.420 [2024-04-25 18:17:06.246307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.246362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.254391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f1430 00:22:08.420 [2024-04-25 18:17:06.255722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.255765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.264378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e1710 00:22:08.420 [2024-04-25 18:17:06.264956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.264985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.276218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6fa8 00:22:08.420 [2024-04-25 18:17:06.277481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.277523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.283214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190eff18 00:22:08.420 [2024-04-25 18:17:06.283540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.283564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.294655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed4e8 00:22:08.420 [2024-04-25 18:17:06.295636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.295680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.302176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ea680 00:22:08.420 [2024-04-25 18:17:06.302257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 
18:17:06.302294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.315893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e9168 00:22:08.420 [2024-04-25 18:17:06.316809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.316838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.326951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e27f0 00:22:08.420 [2024-04-25 18:17:06.328289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.328349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.337467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e9e10 00:22:08.420 [2024-04-25 18:17:06.337971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.338036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:08.420 [2024-04-25 18:17:06.349452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6cc8 00:22:08.420 [2024-04-25 18:17:06.350666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.420 [2024-04-25 18:17:06.350714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.357435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f20d8 00:22:08.679 [2024-04-25 18:17:06.357755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.357802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.368899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fc128 00:22:08.679 [2024-04-25 18:17:06.370612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.370664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.378375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fb048 00:22:08.679 [2024-04-25 18:17:06.379257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:08.679 [2024-04-25 18:17:06.379325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.387075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e5220 00:22:08.679 [2024-04-25 18:17:06.387399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.387441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.398845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e0a68 00:22:08.679 [2024-04-25 18:17:06.399697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.399777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.408317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f0bc0 00:22:08.679 [2024-04-25 18:17:06.409138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.409208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.417961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f6890 00:22:08.679 [2024-04-25 18:17:06.418748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.418797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.426409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f5be8 00:22:08.679 [2024-04-25 18:17:06.427227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.427331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.436065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ebb98 00:22:08.679 [2024-04-25 18:17:06.436534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.436569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.446951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e7818 00:22:08.679 [2024-04-25 18:17:06.447953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8822 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.448001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.455587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f7100 00:22:08.679 [2024-04-25 18:17:06.456711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.456757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.464497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e5658 00:22:08.679 [2024-04-25 18:17:06.464656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.464675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.473856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed920 00:22:08.679 [2024-04-25 18:17:06.474004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.474022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.483262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6300 00:22:08.679 [2024-04-25 18:17:06.483421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.483440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:08.679 [2024-04-25 18:17:06.492770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190fda78 00:22:08.679 [2024-04-25 18:17:06.492912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.679 [2024-04-25 18:17:06.492932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.502494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f8e88 00:22:08.680 [2024-04-25 18:17:06.503034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.503070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.511992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e5ec8 00:22:08.680 [2024-04-25 18:17:06.512894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8824 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.512941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.522870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190dece0 00:22:08.680 [2024-04-25 18:17:06.523744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.523790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.533965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ee190 00:22:08.680 [2024-04-25 18:17:06.535188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.535234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.541109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e4de8 00:22:08.680 [2024-04-25 18:17:06.541552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.541610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.552583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190f1868 00:22:08.680 [2024-04-25 18:17:06.553507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.553572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.562093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190e6fa8 00:22:08.680 [2024-04-25 18:17:06.563012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.563059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.571803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ef6a8 00:22:08.680 [2024-04-25 18:17:06.572748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.572796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.581143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed0b0 00:22:08.680 [2024-04-25 18:17:06.582126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:23048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.582176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:08.680 [2024-04-25 18:17:06.590785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2a10) with pdu=0x2000190ed0b0 00:22:08.680 [2024-04-25 18:17:06.591794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.680 [2024-04-25 18:17:06.591841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:08.680 00:22:08.680 Latency(us) 00:22:08.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.680 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:08.680 nvme0n1 : 2.00 25868.53 101.05 0.00 0.00 4941.71 1839.48 14239.19 00:22:08.680 =================================================================================================================== 00:22:08.680 Total : 25868.53 101.05 0.00 0.00 4941.71 1839.48 14239.19 00:22:08.680 0 00:22:08.938 18:17:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:08.938 18:17:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:08.938 18:17:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:08.938 | .driver_specific 00:22:08.938 | .nvme_error 00:22:08.938 | .status_code 00:22:08.938 | .command_transient_transport_error' 00:22:08.938 18:17:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:08.938 18:17:06 -- host/digest.sh@71 -- # (( 203 > 0 )) 00:22:08.938 18:17:06 -- host/digest.sh@73 -- # killprocess 85168 00:22:08.938 18:17:06 -- common/autotest_common.sh@926 -- # '[' -z 85168 ']' 00:22:08.938 18:17:06 -- common/autotest_common.sh@930 -- # kill -0 85168 00:22:08.938 18:17:06 -- common/autotest_common.sh@931 -- # uname 00:22:08.938 18:17:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:08.938 18:17:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85168 00:22:09.196 killing process with pid 85168 00:22:09.196 Received shutdown signal, test time was about 2.000000 seconds 00:22:09.196 00:22:09.196 Latency(us) 00:22:09.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.196 =================================================================================================================== 00:22:09.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.196 18:17:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:09.196 18:17:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:09.196 18:17:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85168' 00:22:09.196 18:17:06 -- common/autotest_common.sh@945 -- # kill 85168 00:22:09.196 18:17:06 -- common/autotest_common.sh@950 -- # wait 85168 00:22:09.196 18:17:07 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:09.196 18:17:07 -- host/digest.sh@54 -- # local rw bs qd 00:22:09.196 18:17:07 -- host/digest.sh@56 -- # rw=randwrite 00:22:09.196 18:17:07 -- host/digest.sh@56 -- # bs=131072 00:22:09.196 18:17:07 -- host/digest.sh@56 -- # qd=16 00:22:09.196 18:17:07 -- host/digest.sh@58 -- # bperfpid=85267 00:22:09.196 18:17:07 -- host/digest.sh@60 -- # 
waitforlisten 85267 /var/tmp/bperf.sock 00:22:09.196 18:17:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:09.196 18:17:07 -- common/autotest_common.sh@819 -- # '[' -z 85267 ']' 00:22:09.196 18:17:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:09.196 18:17:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.196 18:17:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:09.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:09.197 18:17:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.197 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.456 [2024-04-25 18:17:07.183089] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:09.456 [2024-04-25 18:17:07.183678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85267 ] 00:22:09.456 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:09.456 Zero copy mechanism will not be used. 00:22:09.456 [2024-04-25 18:17:07.322319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.715 [2024-04-25 18:17:07.405803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.282 18:17:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.282 18:17:08 -- common/autotest_common.sh@852 -- # return 0 00:22:10.282 18:17:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:10.282 18:17:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:10.540 18:17:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:10.540 18:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.540 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:10.540 18:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.540 18:17:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:10.540 18:17:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:10.799 nvme0n1 00:22:10.799 18:17:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:10.800 18:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.800 18:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:10.800 18:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.800 18:17:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:10.800 18:17:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:10.800 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:10.800 Zero copy mechanism will not be used. 00:22:10.800 Running I/O for 2 seconds... 
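The trace above is the next pass of host/digest.sh's error-injection loop: bdevperf is restarted against /var/tmp/bperf.sock for a randwrite workload with 131072-byte I/O at queue depth 16, the nvme0 controller is attached with --ddgst so data digests are exercised, CRC32C corruption is injected through the accel error-injection RPC (issued via rpc_cmd, i.e. against the target's default RPC socket rather than bperf.sock), and after perform_tests the script verifies that the corrupted digests surfaced as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions in the bdev iostat counters — the "(( 203 > 0 ))" check earlier in the log is that assertion for the previous pass. Condensed into a plain shell sketch: the command names, flags, and paths are the ones traced above, while the backgrounding and the hypothetical wait_for_rpc_socket helper stand in for the script's waitforlisten/bperf_rpc plumbing and are not part of the script itself.

# Start bdevperf in wait-for-RPC mode (-z) on a private RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
wait_for_rpc_socket /var/tmp/bperf.sock   # hypothetical helper; the script uses waitforlisten

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Keep per-status-code error statistics and retry indefinitely so transient
# transport errors are counted instead of failing the bdev.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled (--ddgst).
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd CRC32C operation (target-side RPC socket, as rpc_cmd does),
# then drive the workload for the configured 2 seconds.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Each digest failure is reported back as COMMAND TRANSIENT TRANSPORT ERROR (00/22);
# the check passes if the per-bdev counter is non-zero.
errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 ))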
00:22:10.800 [2024-04-25 18:17:08.671530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.671872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.671913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.675838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.676006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.676041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.680114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.680271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.680292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.684219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.684401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.684465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.688424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.688525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.688545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.692878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.692977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.692997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.697594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.697765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.697801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.701741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.702011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.702046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.705868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.706142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.706179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.710085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.710241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.710262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.714210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.714345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.714376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.718463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.718562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.718583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.722526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.722638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.722657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.726727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.726866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.726886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.800 [2024-04-25 18:17:08.731254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:10.800 [2024-04-25 18:17:08.731446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.800 [2024-04-25 18:17:08.731469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.735841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.736130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.736171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.740157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.740414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.740462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.744323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.744467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.744487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.748398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.748525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.748545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.752409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.752531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.752550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.756457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.756568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.756588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.760563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.760698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.760718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.764776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.764924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.764944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.768927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.769170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.769248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.773048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.773341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.773363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.777098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.777300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.777322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.781186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.781339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.781361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.785164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.785321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 
[2024-04-25 18:17:08.785342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.789359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.789459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.789481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.793364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.793482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.793502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.797443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.797575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.797611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.801458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.801715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.801761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.061 [2024-04-25 18:17:08.805447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.061 [2024-04-25 18:17:08.805724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-04-25 18:17:08.805761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.809720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.809861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.809881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.813807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.813920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.813940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.817934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.818044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.818064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.821966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.822096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.822116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.826168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.826316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.826337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.830352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.830495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.830514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.834485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.834720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.834773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.838601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.838943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.838982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.842743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.842863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.842884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.846924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.847026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.847045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.850958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.851079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.851100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.854987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.855104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.855124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.859125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.859274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.859309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.863455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.863588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.863609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.867733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.867949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.867969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.871979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.872206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.872226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.876094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.876240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.876260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.880237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.880384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.880404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.884221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.884371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.884392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.888273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.888426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.888447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.892493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.892623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.892644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.896436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.896578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.896599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.900619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 
[2024-04-25 18:17:08.900839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.900860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.904719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.904948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.904969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.908867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.909023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.909044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.912971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.062 [2024-04-25 18:17:08.913101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-04-25 18:17:08.913121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.062 [2024-04-25 18:17:08.916925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.917053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.917073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.920981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.921090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.921110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.924973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.925119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.925139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.929105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.929284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.929320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.933356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.933632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.933669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.937336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.937514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.937550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.941439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.941639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.941682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.945553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.945652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.945672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.949784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.949904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.949924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.954245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.954390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.954410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.958698] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.958838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.958858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.962828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.962969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.962989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.967046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.967263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.967316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.971156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.971394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.971414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.975211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.975378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.975399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.979346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.979449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.979469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.983437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.983555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.983575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:11.063 [2024-04-25 18:17:08.987439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.987538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.987559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.063 [2024-04-25 18:17:08.991957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.063 [2024-04-25 18:17:08.992104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.063 [2024-04-25 18:17:08.992124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.324 [2024-04-25 18:17:08.996222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.324 [2024-04-25 18:17:08.996396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.324 [2024-04-25 18:17:08.996416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.324 [2024-04-25 18:17:09.000789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.324 [2024-04-25 18:17:09.001027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.324 [2024-04-25 18:17:09.001100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.324 [2024-04-25 18:17:09.004851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.324 [2024-04-25 18:17:09.005064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.324 [2024-04-25 18:17:09.005085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.324 [2024-04-25 18:17:09.009080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.324 [2024-04-25 18:17:09.009269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.324 [2024-04-25 18:17:09.009306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.324 [2024-04-25 18:17:09.013359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.324 [2024-04-25 18:17:09.013471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.324 [2024-04-25 18:17:09.013508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.017327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.017429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.017449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.021260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.021374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.021394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.025364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.025501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.025538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.029392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.029545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.029579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.033440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.033698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.033745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.037359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.037637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.037672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.041560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.041690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.041710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.045589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.045675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.045695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.049594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.049753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.049774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.053719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.053853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.053872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.057765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.057916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.057935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.061924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.062068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.062088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.066134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.066353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.066373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.070225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.070470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.070549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.074300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.074426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.074447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.078570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.078682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.078702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.082728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.082827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.082846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.086821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.086942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.086962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.090984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.091125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.091145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.094998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.095166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.095202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.099222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.099518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 
[2024-04-25 18:17:09.099555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.325 [2024-04-25 18:17:09.103454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.325 [2024-04-25 18:17:09.103709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.325 [2024-04-25 18:17:09.103752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same repeated sequence continues from 18:17:09.107562 through 18:17:09.683003 (console time 00:22:11.325 - 00:22:11.853): each injected data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 (tcp.c:2034:data_crc32_calc_done) is followed by a WRITE sqid:1 cid:15 nsid:1 len:32 print_command notice and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in lba and sqhd (cycling 0001/0021/0041/0061) ...]
00:22:11.853 [2024-04-25 18:17:09.682885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.853 [2024-04-25 18:17:09.682983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.853 [2024-04-25 18:17:09.683003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.853 [2024-04-25 18:17:09.687058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.853 [2024-04-25 18:17:09.687181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.853 [2024-04-25 18:17:09.687201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.853 [2024-04-25 18:17:09.691097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.853 [2024-04-25 18:17:09.691211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.691231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.695161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.695293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.695326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.699249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.699447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.699467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.703316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.703469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.703489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.707376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.707595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.707630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.711501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.711725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.711761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.715626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.715812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.715848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.719684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.719822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.719841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.723676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.723808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.723828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.727646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.727789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.727808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.732129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.732319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.732340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.736692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.736865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.736886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.740869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.741099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 
[2024-04-25 18:17:09.741136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.744871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.745176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.745320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.748798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.748922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.748942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.752858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.752973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.752993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.756839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.756964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.756984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.760808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.760939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.760960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.764841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.765025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.765046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.768990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.769139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.769158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.773164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.773444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.773468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.777097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.777362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.777384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.854 [2024-04-25 18:17:09.781510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:11.854 [2024-04-25 18:17:09.781767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-04-25 18:17:09.781788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.115 [2024-04-25 18:17:09.786007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.115 [2024-04-25 18:17:09.786143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.115 [2024-04-25 18:17:09.786163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.115 [2024-04-25 18:17:09.790106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.115 [2024-04-25 18:17:09.790251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.115 [2024-04-25 18:17:09.790271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.115 [2024-04-25 18:17:09.794386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.115 [2024-04-25 18:17:09.794498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.115 [2024-04-25 18:17:09.794519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.115 [2024-04-25 18:17:09.798421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.115 [2024-04-25 18:17:09.798587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.798607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.802429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.802623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.802643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.806608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.806825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.806845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.810638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.810841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.810861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.814820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.815018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.815038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.818893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.819035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.819054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.823042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.823175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.823198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.827105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.827222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.827242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.831237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.831462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.831484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.835308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.835433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.835453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.839359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.839576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.839611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.843395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.843645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.843724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.847410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.847592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.847612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.851495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.851640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.851674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.855505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 
[2024-04-25 18:17:09.855619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.855638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.859453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.859569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.859588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.863584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.863746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.863766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.867589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.867729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.867749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.871732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.871946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.871966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.875749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.875954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.875973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.879868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.880061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.880081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.884015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.884132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.884152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-04-25 18:17:09.888070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.116 [2024-04-25 18:17:09.888188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-04-25 18:17:09.888208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.892133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.892253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.892273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.896173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.896370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.896391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.900274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.900469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.900489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.904363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.904595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.904615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.908333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.908574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.908616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.912378] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.912550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.912571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.916463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.916596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.916617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.920510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.920647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.920668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.924578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.924729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.924749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.928540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.928733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.928753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.932598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.932784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.932804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.936740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.936957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.936977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:12.117 [2024-04-25 18:17:09.940659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.940879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.940898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.944690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.944875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.944894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.948663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.948808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.948828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.952740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.952855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.952875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.956713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.956829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.956850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.960805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.960991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.961012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.964852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.965008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.965028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.969069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.969341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.969364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.973092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.973343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.973366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.977056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.977261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.977283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.981075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.117 [2024-04-25 18:17:09.981190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-04-25 18:17:09.981235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-04-25 18:17:09.985025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:09.985136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:09.985157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:09.988940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:09.989057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:09.989078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:09.993374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:09.993543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:09.993579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:09.997855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:09.998002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:09.998022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.002352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.002594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.002625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.006528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.006736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.006756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.010724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.010921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.010942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.014855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.014999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.015019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.018913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.019033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.019053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.023666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.023784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.023805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.028385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.028560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.028581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.032840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.032999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.033021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.037043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.037315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.037337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.041024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.041323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.041362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.118 [2024-04-25 18:17:10.045350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.118 [2024-04-25 18:17:10.045528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-04-25 18:17:10.045550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.049669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.049826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.049845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.053763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.053886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 
[2024-04-25 18:17:10.053906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.057945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.058061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.058081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.062012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.062188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.062239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.066067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.066221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.066241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.070187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.070466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.070503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.074180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.074462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.074500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.078470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.078679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.078731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.082601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.082742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.082762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.086670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.086794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.086814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.090730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.090843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.090862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-04-25 18:17:10.094850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.378 [2024-04-25 18:17:10.095015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-04-25 18:17:10.095035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.098962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.099144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.099163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.103157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.103441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.103479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.107225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.107488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.107534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.111390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.111599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.111621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.115373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.115497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.115517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.119444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.119557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.119576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.123436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.123550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.127494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.127660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.127696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.131524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.131691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.131728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.135678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.135928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.135983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.139731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.140026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.140059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.143863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.144087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.144125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.147934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.148049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.148069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.151995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.152104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.152124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.156075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.156208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.156228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.160476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.160641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.160693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.164807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.164979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.164999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.169408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 
[2024-04-25 18:17:10.169681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.169705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.173969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.174220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.174269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.178723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.178908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.178928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.183112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.183224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.183244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.187542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.187726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.187745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.191913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.192045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.192066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.196414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.196596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.196618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.200838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.201014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.201034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-04-25 18:17:10.205254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.379 [2024-04-25 18:17:10.205541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-04-25 18:17:10.205589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.209484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.209757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.209777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.213633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.213840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.213860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.217799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.217929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.217949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.221805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.221943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.221962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.225868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.225993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.226013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.229977] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.230142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.234167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.234330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.234350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.238266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.238516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.238553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.242318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.242540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.242560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.246377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.246568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.246588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.250859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.251017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.251037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.255508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.255637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.255658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:12.380 [2024-04-25 18:17:10.259573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.259708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.259728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.263648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.263815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.263835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.267645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.267779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.267799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.271705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.271924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.271944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.275752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.275962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.275982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.279846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.280038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.280057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.283921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.284035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.284054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.287981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.288098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.288118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.292009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.292121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.292141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.296086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.296251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.296271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.300163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.300343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.300363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.304305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.304522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.304543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-04-25 18:17:10.308651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.380 [2024-04-25 18:17:10.308942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-04-25 18:17:10.308991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.312939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.313131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.313151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.317380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.317484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.317507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.321314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.321456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.321477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.325280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.325412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.325434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.329349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.329510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.329560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.333680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.333857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.333877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.337743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.337961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.337981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.341696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.341911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.341930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.345827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.346019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.346039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.350017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.350158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.350178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.354190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.354329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.354350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.358176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.358306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.358326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.362265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.362484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.362505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.366304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.640 [2024-04-25 18:17:10.366508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-04-25 18:17:10.366529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-04-25 18:17:10.370568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.370803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 
[2024-04-25 18:17:10.370823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.374622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.374886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.374965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.378720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.378890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.378925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.382839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.382983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.383004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.386870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.387007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.387027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.390863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.390974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.390994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.394964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.395127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.395147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.399120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.399255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.399274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.403301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.403533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.403568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.407255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.407488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.407508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.411389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.411574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.411594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.415489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.415623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.415642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.419539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.419670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.419690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.423606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.423740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.423760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.427681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.427845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.427865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.431716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.431851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.431870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.435865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.436084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.436104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.439854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.440078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.440097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.443960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.444138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.444158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.448039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.448153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.448172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.452270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.452428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.452448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.456561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.456664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.456684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.461136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.461370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.461392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.465555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.465763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.465784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.470107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.470362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.470384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.474517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.474800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-04-25 18:17:10.474834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-04-25 18:17:10.478988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.641 [2024-04-25 18:17:10.479196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.479217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.483457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.483586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.483608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.487823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 
[2024-04-25 18:17:10.487937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.487958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.492081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.492209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.492229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.496582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.496798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.496818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.500775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.500912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.500932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.504992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.505247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.505269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.509402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.509609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.509647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.513743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.513947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.513967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.517866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.517984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.518004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.521992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.522109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.522129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.526137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.526253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.526274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.530548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.530719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.530739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.534681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.534853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.534873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.538892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.539114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.539134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.543038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.543254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.543275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.547117] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.547325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.547345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.551324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.551423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.551442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.555409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.555512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.555533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.559582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.559676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.559696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.563743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.563891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.563911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.567873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.568042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.568063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-04-25 18:17:10.572618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.642 [2024-04-25 18:17:10.572831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-04-25 18:17:10.572867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:12.900 [2024-04-25 18:17:10.576782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.900 [2024-04-25 18:17:10.577011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.900 [2024-04-25 18:17:10.577038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.900 [2024-04-25 18:17:10.581094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.900 [2024-04-25 18:17:10.581320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.900 [2024-04-25 18:17:10.581343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.900 [2024-04-25 18:17:10.585159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.585332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.585354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.589312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.589427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.589449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.593586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.593701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.593721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.597758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.597904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.597924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.601885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.602061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.602080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.606186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.606413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.606439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.610366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.610562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.610582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.614643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.614817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.614837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.618870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.618966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.618985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.622918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.623017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.623037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.627063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.627157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.627176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.631203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.631360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.631381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.635400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.635553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.635574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.639697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.639895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.639915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.643759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.643953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.643973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.647830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.648001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.648021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.652086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.652181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.652201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.656115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.656228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.656249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.901 [2024-04-25 18:17:10.660317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1aa2d50) with pdu=0x2000190fef90 00:22:12.901 [2024-04-25 18:17:10.660413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.901 [2024-04-25 18:17:10.660433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.901 00:22:12.901 Latency(us) 00:22:12.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.901 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:12.901 nvme0n1 : 2.00 7399.13 924.89 0.00 0.00 2157.74 1601.16 8162.21 00:22:12.901 =================================================================================================================== 00:22:12.901 Total : 7399.13 924.89 0.00 0.00 2157.74 1601.16 8162.21 00:22:12.901 0 00:22:12.901 18:17:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:12.901 18:17:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:12.901 18:17:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:12.901 18:17:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:12.901 | .driver_specific 00:22:12.901 | .nvme_error 00:22:12.901 | .status_code 00:22:12.901 | .command_transient_transport_error' 00:22:13.160 18:17:10 -- host/digest.sh@71 -- # (( 477 > 0 )) 00:22:13.160 18:17:10 -- host/digest.sh@73 -- # killprocess 85267 00:22:13.160 18:17:10 -- common/autotest_common.sh@926 -- # '[' -z 85267 ']' 00:22:13.160 18:17:10 -- common/autotest_common.sh@930 -- # kill -0 85267 00:22:13.160 18:17:10 -- common/autotest_common.sh@931 -- # uname 00:22:13.160 18:17:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.160 18:17:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85267 00:22:13.160 killing process with pid 85267 00:22:13.160 Received shutdown signal, test time was about 2.000000 seconds 00:22:13.160 00:22:13.160 Latency(us) 00:22:13.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.160 =================================================================================================================== 00:22:13.160 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.160 18:17:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:13.160 18:17:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:13.160 18:17:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85267' 00:22:13.160 18:17:10 -- common/autotest_common.sh@945 -- # kill 85267 00:22:13.160 18:17:10 -- common/autotest_common.sh@950 -- # wait 85267 00:22:13.441 18:17:11 -- host/digest.sh@115 -- # killprocess 84952 00:22:13.441 18:17:11 -- common/autotest_common.sh@926 -- # '[' -z 84952 ']' 00:22:13.441 18:17:11 -- common/autotest_common.sh@930 -- # kill -0 84952 00:22:13.441 18:17:11 -- common/autotest_common.sh@931 -- # uname 00:22:13.441 18:17:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.441 18:17:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84952 00:22:13.441 killing process with pid 84952 00:22:13.441 18:17:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:13.441 18:17:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:13.441 18:17:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84952' 00:22:13.441 18:17:11 -- common/autotest_common.sh@945 -- # kill 84952 00:22:13.441 18:17:11 -- common/autotest_common.sh@950 -- # wait 84952 00:22:13.749 ************************************ 00:22:13.749 END TEST nvmf_digest_error 00:22:13.749 ************************************ 
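The block above is the tail of the data-digest error-injection run: each injected CRC-32C failure reported at tcp.c:2034 is completed back as COMMAND TRANSIENT TRANSPORT ERROR, and host/digest.sh@71 then asserts that the per-bdev counter for that status code is non-zero (477 in this run). A condensed sketch of that check, rebuilt only from the rpc.py call and jq filter visible in the xtrace above:

    # Read the NVMe error counters for nvme0n1 over the bperf RPC socket and pull out the
    # transient transport error count; the test passes only if at least one was recorded.
    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))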
00:22:13.749 00:22:13.749 real 0m17.789s 00:22:13.749 user 0m32.987s 00:22:13.749 sys 0m4.765s 00:22:13.749 18:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.749 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.749 18:17:11 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:13.749 18:17:11 -- host/digest.sh@139 -- # nvmftestfini 00:22:13.749 18:17:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:13.749 18:17:11 -- nvmf/common.sh@116 -- # sync 00:22:13.749 18:17:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.749 18:17:11 -- nvmf/common.sh@119 -- # set +e 00:22:13.749 18:17:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.749 18:17:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.749 rmmod nvme_tcp 00:22:13.749 rmmod nvme_fabrics 00:22:13.749 rmmod nvme_keyring 00:22:13.749 18:17:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.749 18:17:11 -- nvmf/common.sh@123 -- # set -e 00:22:13.749 18:17:11 -- nvmf/common.sh@124 -- # return 0 00:22:13.749 18:17:11 -- nvmf/common.sh@477 -- # '[' -n 84952 ']' 00:22:13.749 18:17:11 -- nvmf/common.sh@478 -- # killprocess 84952 00:22:13.749 18:17:11 -- common/autotest_common.sh@926 -- # '[' -z 84952 ']' 00:22:13.749 18:17:11 -- common/autotest_common.sh@930 -- # kill -0 84952 00:22:13.749 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (84952) - No such process 00:22:13.749 Process with pid 84952 is not found 00:22:13.749 18:17:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 84952 is not found' 00:22:13.749 18:17:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:13.749 18:17:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:13.749 18:17:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:13.749 18:17:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.749 18:17:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:13.749 18:17:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.749 18:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.749 18:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.749 18:17:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:13.749 ************************************ 00:22:13.749 END TEST nvmf_digest 00:22:13.749 ************************************ 00:22:13.749 00:22:13.749 real 0m36.009s 00:22:13.749 user 1m5.782s 00:22:13.749 sys 0m9.792s 00:22:13.749 18:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.749 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.749 18:17:11 -- nvmf/nvmf.sh@109 -- # [[ 1 -eq 1 ]] 00:22:13.749 18:17:11 -- nvmf/nvmf.sh@109 -- # [[ tcp == \t\c\p ]] 00:22:13.749 18:17:11 -- nvmf/nvmf.sh@111 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:13.749 18:17:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:13.749 18:17:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:13.749 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.749 ************************************ 00:22:13.749 START TEST nvmf_mdns_discovery 00:22:13.749 ************************************ 00:22:13.749 18:17:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:14.008 * Looking for test storage... 
00:22:14.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:14.008 18:17:11 -- nvmf/common.sh@7 -- # uname -s 00:22:14.008 18:17:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.008 18:17:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.008 18:17:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.008 18:17:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.008 18:17:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.008 18:17:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.008 18:17:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.008 18:17:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.008 18:17:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.008 18:17:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.008 18:17:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:22:14.008 18:17:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:22:14.008 18:17:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.008 18:17:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.008 18:17:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:14.008 18:17:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:14.008 18:17:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.008 18:17:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.008 18:17:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.008 18:17:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.008 18:17:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.008 18:17:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.008 18:17:11 -- 
paths/export.sh@5 -- # export PATH 00:22:14.008 18:17:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.008 18:17:11 -- nvmf/common.sh@46 -- # : 0 00:22:14.008 18:17:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:14.008 18:17:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:14.008 18:17:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:14.008 18:17:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.008 18:17:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.008 18:17:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:14.008 18:17:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:14.008 18:17:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:14.008 18:17:11 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:14.008 18:17:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:14.008 18:17:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.008 18:17:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:14.008 18:17:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:14.008 18:17:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:14.008 18:17:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.008 18:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.008 18:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.008 18:17:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:14.008 18:17:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:14.008 18:17:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:14.008 18:17:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:14.008 18:17:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:14.008 18:17:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:14.008 18:17:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.009 18:17:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.009 18:17:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:14.009 18:17:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:14.009 18:17:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:14.009 18:17:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:14.009 18:17:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:14.009 18:17:11 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.009 18:17:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:14.009 18:17:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:14.009 18:17:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:14.009 18:17:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:14.009 18:17:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:14.009 18:17:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:14.009 Cannot find device "nvmf_tgt_br" 00:22:14.009 18:17:11 -- nvmf/common.sh@154 -- # true 00:22:14.009 18:17:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.009 Cannot find device "nvmf_tgt_br2" 00:22:14.009 18:17:11 -- nvmf/common.sh@155 -- # true 00:22:14.009 18:17:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:14.009 18:17:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:14.009 Cannot find device "nvmf_tgt_br" 00:22:14.009 18:17:11 -- nvmf/common.sh@157 -- # true 00:22:14.009 18:17:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:14.009 Cannot find device "nvmf_tgt_br2" 00:22:14.009 18:17:11 -- nvmf/common.sh@158 -- # true 00:22:14.009 18:17:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:14.009 18:17:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:14.009 18:17:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:14.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.009 18:17:11 -- nvmf/common.sh@161 -- # true 00:22:14.009 18:17:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:14.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.009 18:17:11 -- nvmf/common.sh@162 -- # true 00:22:14.009 18:17:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:14.009 18:17:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:14.009 18:17:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:14.009 18:17:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:14.009 18:17:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.009 18:17:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.268 18:17:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.268 18:17:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:14.268 18:17:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:14.268 18:17:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:14.268 18:17:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:14.268 18:17:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:14.268 18:17:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:14.268 18:17:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.268 18:17:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.268 18:17:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.268 18:17:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:22:14.268 18:17:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:14.268 18:17:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.268 18:17:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:14.268 18:17:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.268 18:17:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.268 18:17:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.268 18:17:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:14.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:22:14.268 00:22:14.268 --- 10.0.0.2 ping statistics --- 00:22:14.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.268 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:14.268 18:17:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:14.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:14.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:14.268 00:22:14.268 --- 10.0.0.3 ping statistics --- 00:22:14.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.268 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:14.268 18:17:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:14.268 00:22:14.268 --- 10.0.0.1 ping statistics --- 00:22:14.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.268 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:14.268 18:17:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.268 18:17:12 -- nvmf/common.sh@421 -- # return 0 00:22:14.268 18:17:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:14.268 18:17:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.268 18:17:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:14.268 18:17:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:14.268 18:17:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.268 18:17:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:14.268 18:17:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:14.268 18:17:12 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:14.268 18:17:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:14.268 18:17:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:14.268 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:14.268 18:17:12 -- nvmf/common.sh@469 -- # nvmfpid=85550 00:22:14.268 18:17:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:14.268 18:17:12 -- nvmf/common.sh@470 -- # waitforlisten 85550 00:22:14.268 18:17:12 -- common/autotest_common.sh@819 -- # '[' -z 85550 ']' 00:22:14.268 18:17:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.268 18:17:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:14.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.268 18:17:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
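The nvmf_veth_init sequence traced above builds the topology the rest of this test uses: one initiator-side veth at 10.0.0.1 and two target-side veths at 10.0.0.2 and 10.0.0.3 moved into the nvmf_tgt_ns_spdk namespace, with all host-side peers joined on the nvmf_br bridge and reachability confirmed by the three pings. A condensed sketch of those steps (the link-up and iptables ACCEPT commands shown in the trace are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays in the root netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                        # bridge the three host-side peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br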
00:22:14.268 18:17:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:14.268 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:14.268 [2024-04-25 18:17:12.176526] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:14.268 [2024-04-25 18:17:12.176605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.526 [2024-04-25 18:17:12.314691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.526 [2024-04-25 18:17:12.401904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:14.526 [2024-04-25 18:17:12.402020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.526 [2024-04-25 18:17:12.402031] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.526 [2024-04-25 18:17:12.402039] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.526 [2024-04-25 18:17:12.402064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.461 18:17:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:15.461 18:17:13 -- common/autotest_common.sh@852 -- # return 0 00:22:15.461 18:17:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:15.461 18:17:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:15.461 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.461 18:17:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.461 18:17:13 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:15.461 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.461 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.461 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.461 18:17:13 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:15.461 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.461 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.461 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.461 18:17:13 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.461 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.461 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.461 [2024-04-25 18:17:13.304794] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.461 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:15.462 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.462 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 [2024-04-25 18:17:13.312932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:15.462 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:15.462 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.462 18:17:13 -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.462 null0 00:22:15.462 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:15.462 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.462 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 null1 00:22:15.462 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:15.462 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.462 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 null2 00:22:15.462 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:15.462 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.462 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 null3 00:22:15.462 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:22:15.462 18:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.462 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 18:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@47 -- # hostpid=85606 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@48 -- # waitforlisten 85606 /tmp/host.sock 00:22:15.462 18:17:13 -- common/autotest_common.sh@819 -- # '[' -z 85606 ']' 00:22:15.462 18:17:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:15.462 18:17:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:15.462 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:15.462 18:17:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:15.462 18:17:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:15.462 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:15.462 18:17:13 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:15.719 [2024-04-25 18:17:13.418817] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:15.719 [2024-04-25 18:17:13.418909] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85606 ] 00:22:15.719 [2024-04-25 18:17:13.560403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.977 [2024-04-25 18:17:13.663589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:15.977 [2024-04-25 18:17:13.663775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.543 18:17:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:16.543 18:17:14 -- common/autotest_common.sh@852 -- # return 0 00:22:16.543 18:17:14 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:16.543 18:17:14 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:16.543 18:17:14 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:16.543 18:17:14 -- host/mdns_discovery.sh@57 -- # avahipid=85635 00:22:16.543 18:17:14 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:16.543 18:17:14 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:16.543 18:17:14 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:16.543 Process 1004 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:16.543 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:16.543 Successfully dropped root privileges. 00:22:16.543 avahi-daemon 0.8 starting up. 00:22:16.543 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:16.543 Successfully called chroot(). 00:22:16.543 Successfully dropped remaining capabilities. 00:22:16.543 No service file found in /etc/avahi/services. 00:22:17.480 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:17.480 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:22:17.480 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:17.480 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:17.480 Network interface enumeration completed. 00:22:17.480 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:22:17.480 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:17.480 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:22:17.480 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:17.740 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3440457466. 
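With both applications up, mdns_discovery.sh restarts avahi-daemon inside the target namespace, feeding it a config on fd 63 that limits it to the two target veths and to IPv4; that is why only nvmf_tgt_if and nvmf_tgt_if2 join the mDNS multicast groups in the daemon output above. The host-side application then starts browsing for _nvme-disc._tcp services, as the trace below shows, and the target namespace later publishes a CDC record on port 8009 with avahi-publish. A condensed sketch of these two steps, assuming (as elsewhere in this trace) that rpc_cmd resolves to scripts/rpc.py:

    # mDNS responder restricted to the target interfaces, IPv4 only (config passed via process substitution)
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
        '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &

    # host-side browser: attach to anything advertising _nvme-disc._tcp, identifying as nqn.2021-12.io.spdk:test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test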
00:22:17.740 18:17:15 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:17.740 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.740 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.740 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:17.740 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.740 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.740 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:17.740 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # xargs 00:22:17.740 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # sort 00:22:17.740 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@64 -- # sort 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@64 -- # xargs 00:22:17.740 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.740 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.740 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:17.740 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.740 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.740 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # sort 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@68 -- # xargs 00:22:17.740 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.740 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.740 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:17.740 18:17:15 -- host/mdns_discovery.sh@64 -- # sort 00:22:17.740 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.740 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.740 18:17:15 -- 
host/mdns_discovery.sh@64 -- # xargs 00:22:17.740 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.999 18:17:15 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:17.999 18:17:15 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:17.999 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.999 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.999 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.999 18:17:15 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:17.999 18:17:15 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.999 18:17:15 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:17.999 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.999 18:17:15 -- host/mdns_discovery.sh@68 -- # sort 00:22:17.999 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.999 18:17:15 -- host/mdns_discovery.sh@68 -- # xargs 00:22:17.999 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.999 [2024-04-25 18:17:15.769891] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@64 -- # sort 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@64 -- # xargs 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 [2024-04-25 18:17:15.837599] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 [2024-04-25 18:17:15.877580] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:18.000 18:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.000 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.000 [2024-04-25 18:17:15.885595] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:18.000 18:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=85686 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:18.000 18:17:15 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:18.936 [2024-04-25 18:17:16.669891] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:19.195 Established under name 'CDC' 00:22:19.195 [2024-04-25 18:17:17.069917] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:19.195 [2024-04-25 18:17:17.069941] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:22:19.195 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:19.195 cookie is 0 00:22:19.195 is_local: 1 00:22:19.195 our_own: 0 00:22:19.195 wide_area: 0 00:22:19.195 multicast: 1 00:22:19.195 cached: 1 00:22:19.454 [2024-04-25 18:17:17.169913] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:19.454 [2024-04-25 18:17:17.169935] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:22:19.454 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:19.454 cookie is 0 00:22:19.454 is_local: 1 00:22:19.454 our_own: 0 00:22:19.454 wide_area: 0 00:22:19.454 multicast: 1 00:22:19.454 cached: 1 00:22:20.391 [2024-04-25 18:17:18.079454] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:20.391 [2024-04-25 18:17:18.079479] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:20.391 [2024-04-25 18:17:18.079513] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:20.391 [2024-04-25 18:17:18.165573] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:20.391 [2024-04-25 18:17:18.178843] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery 
ctrlr attached 00:22:20.391 [2024-04-25 18:17:18.178864] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:20.391 [2024-04-25 18:17:18.178894] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:20.391 [2024-04-25 18:17:18.227375] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:20.391 [2024-04-25 18:17:18.227401] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:20.391 [2024-04-25 18:17:18.264529] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:20.391 [2024-04-25 18:17:18.319272] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:20.391 [2024-04-25 18:17:18.319358] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:23.675 18:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.675 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@80 -- # sort 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@80 -- # xargs 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:23.675 18:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:23.675 18:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.675 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@76 -- # sort 00:22:23.675 18:17:20 -- host/mdns_discovery.sh@76 -- # xargs 00:22:23.675 18:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:23.675 18:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@68 -- # sort 00:22:23.675 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@68 -- # xargs 00:22:23.675 18:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.675 18:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.675 
18:17:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:23.675 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@64 -- # sort 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@64 -- # xargs 00:22:23.675 18:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:23.675 18:17:21 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:23.676 18:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:23.676 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # xargs 00:22:23.676 18:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:23.676 18:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.676 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@72 -- # xargs 00:22:23.676 18:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:23.676 18:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.676 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.676 18:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:23.676 18:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.676 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.676 18:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:23.676 18:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.676 18:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.676 18:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.676 18:17:21 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@64 -- # sort 00:22:24.612 18:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@64 -- # xargs 00:22:24.612 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:22:24.612 18:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:24.612 18:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.612 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:24.612 18:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:24.612 18:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.612 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:22:24.612 [2024-04-25 18:17:22.436637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:24.612 [2024-04-25 18:17:22.437439] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:24.612 [2024-04-25 18:17:22.437479] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.612 [2024-04-25 18:17:22.437535] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:24.612 [2024-04-25 18:17:22.437550] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:24.612 18:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:24.612 18:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.612 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:22:24.612 [2024-04-25 18:17:22.444613] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:24.612 [2024-04-25 18:17:22.445440] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:24.612 [2024-04-25 18:17:22.445498] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:24.612 18:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.612 18:17:22 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:24.871 [2024-04-25 18:17:22.576524] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:24.871 [2024-04-25 18:17:22.576695] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:24.871 [2024-04-25 18:17:22.633817] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:24.871 [2024-04-25 18:17:22.633841] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:24.871 [2024-04-25 18:17:22.633864] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:24.871 [2024-04-25 18:17:22.633880] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.871 [2024-04-25 18:17:22.633917] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:24.871 [2024-04-25 18:17:22.633925] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:24.871 [2024-04-25 18:17:22.633930] 
bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:24.871 [2024-04-25 18:17:22.633942] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:24.871 [2024-04-25 18:17:22.679693] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:24.871 [2024-04-25 18:17:22.679714] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:24.871 [2024-04-25 18:17:22.679768] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:24.871 [2024-04-25 18:17:22.679776] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@68 -- # sort 00:22:25.805 18:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.805 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@68 -- # xargs 00:22:25.805 18:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@64 -- # sort 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@64 -- # xargs 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.805 18:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.805 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:25.805 18:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:25.805 18:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:25.805 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # xargs 00:22:25.805 18:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:25.805 18:17:23 -- common/autotest_common.sh@551 -- 
# xtrace_disable 00:22:25.805 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@72 -- # xargs 00:22:25.805 18:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:25.805 18:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.805 18:17:23 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:25.805 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:25.805 18:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.064 18:17:23 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:26.064 18:17:23 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:26.064 18:17:23 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:26.064 18:17:23 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:26.064 18:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.064 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:26.064 [2024-04-25 18:17:23.746272] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:26.064 [2024-04-25 18:17:23.746346] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.064 [2024-04-25 18:17:23.746382] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:26.064 [2024-04-25 18:17:23.746396] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:26.064 [2024-04-25 18:17:23.748514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.748552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.748564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.748573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.748583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.748591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.748600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.748625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.748633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.064 
18:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.064 18:17:23 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:26.064 18:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.064 18:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:26.064 [2024-04-25 18:17:23.754297] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:26.064 [2024-04-25 18:17:23.754380] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:26.064 [2024-04-25 18:17:23.755911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.755955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.755983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.755992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.756000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.756008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.756017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-04-25 18:17:23.756024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-04-25 18:17:23.756033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.064 18:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.064 18:17:23 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:26.064 [2024-04-25 18:17:23.758479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.064 [2024-04-25 18:17:23.765882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.064 [2024-04-25 18:17:23.768496] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.064 [2024-04-25 18:17:23.768631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.768706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.768722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.064 [2024-04-25 18:17:23.768731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.064 [2024-04-25 18:17:23.768746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.064 [2024-04-25 18:17:23.768760] 
nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.064 [2024-04-25 18:17:23.768783] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.064 [2024-04-25 18:17:23.768808] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.064 [2024-04-25 18:17:23.768838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.064 [2024-04-25 18:17:23.775891] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.064 [2024-04-25 18:17:23.776001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.776044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.776059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.064 [2024-04-25 18:17:23.776068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.064 [2024-04-25 18:17:23.776082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.064 [2024-04-25 18:17:23.776095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.064 [2024-04-25 18:17:23.776103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.064 [2024-04-25 18:17:23.776111] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.064 [2024-04-25 18:17:23.776123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.064 [2024-04-25 18:17:23.778578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.064 [2024-04-25 18:17:23.778714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.778755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.778770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.064 [2024-04-25 18:17:23.778778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.064 [2024-04-25 18:17:23.778792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.064 [2024-04-25 18:17:23.778804] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.064 [2024-04-25 18:17:23.778811] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.064 [2024-04-25 18:17:23.778819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.064 [2024-04-25 18:17:23.778831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.064 [2024-04-25 18:17:23.785952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.064 [2024-04-25 18:17:23.786055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.786096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.786111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.064 [2024-04-25 18:17:23.786119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.064 [2024-04-25 18:17:23.786133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.064 [2024-04-25 18:17:23.786145] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.064 [2024-04-25 18:17:23.786152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.064 [2024-04-25 18:17:23.786160] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.064 [2024-04-25 18:17:23.786172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.064 [2024-04-25 18:17:23.788640] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.064 [2024-04-25 18:17:23.788743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.788786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.788802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.064 [2024-04-25 18:17:23.788811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.064 [2024-04-25 18:17:23.788825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.064 [2024-04-25 18:17:23.788838] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.064 [2024-04-25 18:17:23.788845] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.064 [2024-04-25 18:17:23.788853] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.064 [2024-04-25 18:17:23.788866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.064 [2024-04-25 18:17:23.796013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.064 [2024-04-25 18:17:23.796117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.796159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-04-25 18:17:23.796173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.064 [2024-04-25 18:17:23.796182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.796196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.796208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.796216] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.796224] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.796236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.798716] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.798816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.798857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.798871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.798880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.798894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.798906] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.798913] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.798920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.798932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.806089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.806199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.806242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.806257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.065 [2024-04-25 18:17:23.806266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.806296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.806320] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.806328] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.806336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.806350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.808789] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.808891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.808932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.808947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.808956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.808970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.808982] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.808989] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.808996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.809008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.816169] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.816275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.816336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.816352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.065 [2024-04-25 18:17:23.816361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.816376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.816403] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.816412] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.816420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.816433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.818866] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.818968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.819010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.819025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.819038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.819052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.819065] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.819072] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.819079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.819092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.826246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.826374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.826417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.826432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.065 [2024-04-25 18:17:23.826440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.826454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.826488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.826497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.826504] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.826517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.828926] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.829027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.829068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.829083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.829091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.829105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.829117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.829124] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.829131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.829143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.836345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.836448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.836489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.836504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.065 [2024-04-25 18:17:23.836514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.836528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.836554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.836563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.836571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.836583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.838986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.839087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.839129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.839144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.839153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.839167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.839179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.839186] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.839194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.839206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.846422] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.846519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.846563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.846578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.065 [2024-04-25 18:17:23.846587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.846602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.846651] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.846662] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.846685] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.846714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.849044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.849146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.849188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.849229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.849256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.849271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.849284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.849292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.849300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.849326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.856487] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.856588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.856630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.856644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.065 [2024-04-25 18:17:23.856653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.856667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.856692] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.856701] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.856709] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.856721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.859120] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.859222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.859265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.859296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.859317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.859332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.859346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.859353] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.859361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.859374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.866546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.866649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.866691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.866706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.065 [2024-04-25 18:17:23.866714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.866728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.866755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.866764] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.866771] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.065 [2024-04-25 18:17:23.866784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.065 [2024-04-25 18:17:23.869193] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.065 [2024-04-25 18:17:23.869317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.869363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.869379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.065 [2024-04-25 18:17:23.869388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.065 [2024-04-25 18:17:23.869403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.065 [2024-04-25 18:17:23.869417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.065 [2024-04-25 18:17:23.869425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.065 [2024-04-25 18:17:23.869433] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.065 [2024-04-25 18:17:23.869447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.065 [2024-04-25 18:17:23.876623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.065 [2024-04-25 18:17:23.876746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.065 [2024-04-25 18:17:23.876792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.066 [2024-04-25 18:17:23.876807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.066 [2024-04-25 18:17:23.876817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.066 [2024-04-25 18:17:23.876831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.066 [2024-04-25 18:17:23.876861] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.066 [2024-04-25 18:17:23.876870] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.066 [2024-04-25 18:17:23.876878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.066 [2024-04-25 18:17:23.876891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.066 [2024-04-25 18:17:23.879293] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.066 [2024-04-25 18:17:23.879406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.066 [2024-04-25 18:17:23.879451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.066 [2024-04-25 18:17:23.879467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14395e0 with addr=10.0.0.2, port=4420 00:22:26.066 [2024-04-25 18:17:23.879476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14395e0 is same with the state(5) to be set 00:22:26.066 [2024-04-25 18:17:23.879491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14395e0 (9): Bad file descriptor 00:22:26.066 [2024-04-25 18:17:23.879504] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.066 [2024-04-25 18:17:23.879512] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.066 [2024-04-25 18:17:23.879520] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.066 [2024-04-25 18:17:23.879534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:26.066 [2024-04-25 18:17:23.886701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:26.066 [2024-04-25 18:17:23.886807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.066 [2024-04-25 18:17:23.886849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.066 [2024-04-25 18:17:23.886864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0480 with addr=10.0.0.3, port=4420 00:22:26.066 [2024-04-25 18:17:23.886873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0480 is same with the state(5) to be set 00:22:26.066 [2024-04-25 18:17:23.886888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0480 (9): Bad file descriptor 00:22:26.066 [2024-04-25 18:17:23.886925] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:26.066 [2024-04-25 18:17:23.886943] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:26.066 [2024-04-25 18:17:23.886961] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.066 [2024-04-25 18:17:23.887026] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:26.066 [2024-04-25 18:17:23.887042] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:26.066 [2024-04-25 18:17:23.887056] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:26.066 [2024-04-25 18:17:23.887086] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:26.066 [2024-04-25 18:17:23.887098] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:26.066 [2024-04-25 18:17:23.887107] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:26.066 [2024-04-25 18:17:23.887131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
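(Editor's aside, not part of the captured output.) The stretch of log above corresponds to the test dropping both port-4420 listeners and letting discovery prune those paths: the host-side controllers burn through failed reconnects to 4420 (connect() errno 111) until the next discovery log page reports "4420 not found / 4421 found again" for each subsystem. A minimal sketch of driving the same step by hand with SPDK's rpc.py is below; the subcommands, NQNs, addresses, and the /tmp/host.sock socket are taken from this run, while the scripts/rpc.py path is an assumption for a standalone setup.

```bash
# Sketch only: reproduces the step this log captures, outside the test harness.

# Drop the 10.0.0.x:4420 listeners on the target so discovery prunes those paths.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 \
    -t tcp -a 10.0.0.3 -s 4420

# On the host side, confirm each discovered controller is left with only the
# 4421 path (the same query the test's get_subsystem_paths helper runs).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid'
```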
00:22:26.066 [2024-04-25 18:17:23.973013] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:26.066 [2024-04-25 18:17:23.973080] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.998 18:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@68 -- # sort 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:26.998 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@68 -- # xargs 00:22:26.998 18:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.998 18:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:26.998 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@64 -- # sort 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@64 -- # xargs 00:22:26.998 18:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:26.998 18:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.998 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # xargs 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:26.998 18:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:26.998 18:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.998 18:17:24 -- host/mdns_discovery.sh@72 -- # xargs 00:22:26.998 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:22:27.258 18:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.258 18:17:24 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:27.258 18:17:24 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:27.258 18:17:24 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:27.258 
18:17:24 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:27.258 18:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:27.258 18:17:24 -- common/autotest_common.sh@10 -- # set +x 00:22:27.258 18:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.258 18:17:25 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:27.258 18:17:25 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:27.258 18:17:25 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:27.258 18:17:25 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:27.258 18:17:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:27.258 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:22:27.258 18:17:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.258 18:17:25 -- host/mdns_discovery.sh@172 -- # sleep 1 00:22:27.258 [2024-04-25 18:17:25.069905] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:28.192 18:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@80 -- # sort 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:28.192 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@80 -- # xargs 00:22:28.192 18:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@68 -- # sort 00:22:28.192 18:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.192 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.192 18:17:26 -- host/mdns_discovery.sh@68 -- # xargs 00:22:28.192 18:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@64 -- # sort 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:28.451 18:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.451 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@64 -- # xargs 00:22:28.451 18:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:22:28.451 18:17:26 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:28.452 18:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.452 18:17:26 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:28.452 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.452 18:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.452 18:17:26 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:22:28.452 18:17:26 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:22:28.452 18:17:26 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:22:28.452 18:17:26 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:28.452 18:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.452 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.452 18:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.452 18:17:26 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:28.452 18:17:26 -- common/autotest_common.sh@640 -- # local es=0 00:22:28.452 18:17:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:28.452 18:17:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:28.452 18:17:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:28.452 18:17:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:28.452 18:17:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:28.452 18:17:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:28.452 18:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.452 18:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.452 [2024-04-25 18:17:26.284597] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:22:28.452 2024/04/25 18:17:26 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:28.452 request: 00:22:28.452 { 00:22:28.452 "method": "bdev_nvme_start_mdns_discovery", 00:22:28.452 "params": { 00:22:28.452 "name": "mdns", 00:22:28.452 "svcname": "_nvme-disc._http", 00:22:28.452 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:28.452 } 00:22:28.452 } 00:22:28.452 Got JSON-RPC error response 00:22:28.452 GoRPCClient: error on JSON-RPC call 00:22:28.452 18:17:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:28.452 18:17:26 -- common/autotest_common.sh@643 -- # es=1 00:22:28.452 18:17:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:28.452 18:17:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:28.452 18:17:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:28.452 18:17:26 -- host/mdns_discovery.sh@183 -- # sleep 5 00:22:29.018 [2024-04-25 18:17:26.673090] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:29.018 [2024-04-25 18:17:26.773087] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:29.018 [2024-04-25 18:17:26.873092] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:29.018 [2024-04-25 18:17:26.873111] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 
fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:22:29.018 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:29.018 cookie is 0 00:22:29.018 is_local: 1 00:22:29.018 our_own: 0 00:22:29.018 wide_area: 0 00:22:29.018 multicast: 1 00:22:29.018 cached: 1 00:22:29.277 [2024-04-25 18:17:26.973094] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:29.277 [2024-04-25 18:17:26.973117] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:22:29.277 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:29.277 cookie is 0 00:22:29.277 is_local: 1 00:22:29.277 our_own: 0 00:22:29.277 wide_area: 0 00:22:29.277 multicast: 1 00:22:29.277 cached: 1 00:22:30.212 [2024-04-25 18:17:27.880694] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:30.212 [2024-04-25 18:17:27.880719] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:30.212 [2024-04-25 18:17:27.880752] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:30.212 [2024-04-25 18:17:27.966797] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:22:30.212 [2024-04-25 18:17:27.980643] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:30.212 [2024-04-25 18:17:27.980664] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:30.212 [2024-04-25 18:17:27.980697] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:30.212 [2024-04-25 18:17:28.031574] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:30.212 [2024-04-25 18:17:28.031603] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:30.212 [2024-04-25 18:17:28.066740] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:22:30.212 [2024-04-25 18:17:28.125333] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:30.212 [2024-04-25 18:17:28.125361] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:33.495 18:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@80 -- # sort 00:22:33.495 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@80 -- # xargs 00:22:33.495 18:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:33.495 
18:17:31 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@76 -- # sort 00:22:33.495 18:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.495 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.495 18:17:31 -- host/mdns_discovery.sh@76 -- # xargs 00:22:33.496 18:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.496 18:17:31 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:33.496 18:17:31 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:22:33.496 18:17:31 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.496 18:17:31 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:33.496 18:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.496 18:17:31 -- host/mdns_discovery.sh@64 -- # sort 00:22:33.496 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.496 18:17:31 -- host/mdns_discovery.sh@64 -- # xargs 00:22:33.755 18:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:33.755 18:17:31 -- common/autotest_common.sh@640 -- # local es=0 00:22:33.755 18:17:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:33.755 18:17:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:33.755 18:17:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.755 18:17:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:33.755 18:17:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.755 18:17:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:33.755 18:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.755 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.755 [2024-04-25 18:17:31.480711] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:22:33.755 2024/04/25 18:17:31 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:33.755 request: 00:22:33.755 { 00:22:33.755 "method": "bdev_nvme_start_mdns_discovery", 00:22:33.755 "params": { 00:22:33.755 "name": "cdc", 00:22:33.755 "svcname": "_nvme-disc._tcp", 00:22:33.755 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:33.755 } 00:22:33.755 } 00:22:33.755 Got JSON-RPC error response 00:22:33.755 GoRPCClient: error on JSON-RPC call 00:22:33.755 18:17:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:33.755 18:17:31 -- common/autotest_common.sh@643 -- # es=1 00:22:33.755 18:17:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:33.755 18:17:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:33.755 18:17:31 -- common/autotest_common.sh@667 -- # (( 
!es == 0 )) 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:33.755 18:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:33.755 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@76 -- # sort 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@76 -- # xargs 00:22:33.755 18:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@64 -- # xargs 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:33.755 18:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@64 -- # sort 00:22:33.755 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.755 18:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:33.755 18:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.755 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.755 18:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@197 -- # kill 85606 00:22:33.755 18:17:31 -- host/mdns_discovery.sh@200 -- # wait 85606 00:22:34.013 [2024-04-25 18:17:31.712681] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:34.013 18:17:31 -- host/mdns_discovery.sh@201 -- # kill 85686 00:22:34.013 Got SIGTERM, quitting. 00:22:34.013 18:17:31 -- host/mdns_discovery.sh@202 -- # kill 85635 00:22:34.013 18:17:31 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:22:34.013 18:17:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:34.013 18:17:31 -- nvmf/common.sh@116 -- # sync 00:22:34.013 Got SIGTERM, quitting. 00:22:34.013 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:34.013 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:34.013 avahi-daemon 0.8 exiting. 
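The rejected second bdev_nvme_start_mdns_discovery call above is the behaviour under test: only one mDNS discovery service may be registered per service name, so a second start for _nvme-disc._tcp comes back with Code=-17 (File exists) while the first instance keeps running. A minimal sketch of the same check outside the harness, assuming a host SPDK app is already serving RPCs on /tmp/host.sock and that the first discovery was registered under the base name mdns (as the mdns0_nvme/mdns1_nvme controller names suggest):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # First registration succeeds and starts browsing _nvme-disc._tcp via avahi.
  $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # A second registration for the same service is refused with -17 (File exists), as logged above.
  if ! $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
    echo "second mDNS discovery start rejected, as expected"
  fi
  # Tear the first one down when finished.
  $rpc -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns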
00:22:34.013 18:17:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:34.013 18:17:31 -- nvmf/common.sh@119 -- # set +e 00:22:34.013 18:17:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:34.013 18:17:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:34.013 rmmod nvme_tcp 00:22:34.013 rmmod nvme_fabrics 00:22:34.013 rmmod nvme_keyring 00:22:34.013 18:17:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:34.013 18:17:31 -- nvmf/common.sh@123 -- # set -e 00:22:34.013 18:17:31 -- nvmf/common.sh@124 -- # return 0 00:22:34.013 18:17:31 -- nvmf/common.sh@477 -- # '[' -n 85550 ']' 00:22:34.013 18:17:31 -- nvmf/common.sh@478 -- # killprocess 85550 00:22:34.013 18:17:31 -- common/autotest_common.sh@926 -- # '[' -z 85550 ']' 00:22:34.013 18:17:31 -- common/autotest_common.sh@930 -- # kill -0 85550 00:22:34.013 18:17:31 -- common/autotest_common.sh@931 -- # uname 00:22:34.013 18:17:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:34.013 18:17:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85550 00:22:34.013 killing process with pid 85550 00:22:34.013 18:17:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:34.013 18:17:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:34.014 18:17:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85550' 00:22:34.014 18:17:31 -- common/autotest_common.sh@945 -- # kill 85550 00:22:34.014 18:17:31 -- common/autotest_common.sh@950 -- # wait 85550 00:22:34.271 18:17:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:34.271 18:17:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:34.271 18:17:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:34.271 18:17:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.271 18:17:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:34.272 18:17:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.272 18:17:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.272 18:17:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.272 18:17:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:34.530 00:22:34.530 real 0m20.525s 00:22:34.530 user 0m40.271s 00:22:34.530 sys 0m1.966s 00:22:34.530 18:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.530 18:17:32 -- common/autotest_common.sh@10 -- # set +x 00:22:34.530 ************************************ 00:22:34.530 END TEST nvmf_mdns_discovery 00:22:34.530 ************************************ 00:22:34.530 18:17:32 -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:22:34.530 18:17:32 -- nvmf/nvmf.sh@115 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:34.530 18:17:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:34.530 18:17:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:34.530 18:17:32 -- common/autotest_common.sh@10 -- # set +x 00:22:34.530 ************************************ 00:22:34.530 START TEST nvmf_multipath 00:22:34.530 ************************************ 00:22:34.530 18:17:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:34.530 * Looking for test storage... 
00:22:34.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:34.530 18:17:32 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.530 18:17:32 -- nvmf/common.sh@7 -- # uname -s 00:22:34.530 18:17:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.530 18:17:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.530 18:17:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.530 18:17:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.530 18:17:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.530 18:17:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.530 18:17:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.530 18:17:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.530 18:17:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.530 18:17:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.530 18:17:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:22:34.530 18:17:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:22:34.530 18:17:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.530 18:17:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.530 18:17:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.530 18:17:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.530 18:17:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.530 18:17:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.530 18:17:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.530 18:17:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.530 18:17:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.530 18:17:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.530 18:17:32 -- paths/export.sh@5 
-- # export PATH 00:22:34.530 18:17:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.530 18:17:32 -- nvmf/common.sh@46 -- # : 0 00:22:34.530 18:17:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:34.530 18:17:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:34.530 18:17:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:34.530 18:17:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.530 18:17:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.530 18:17:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:34.530 18:17:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:34.530 18:17:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:34.530 18:17:32 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.530 18:17:32 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.530 18:17:32 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.530 18:17:32 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:34.530 18:17:32 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.530 18:17:32 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:34.530 18:17:32 -- host/multipath.sh@30 -- # nvmftestinit 00:22:34.530 18:17:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:34.530 18:17:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.530 18:17:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:34.530 18:17:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:34.530 18:17:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:34.530 18:17:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.530 18:17:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.530 18:17:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.530 18:17:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:34.531 18:17:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:34.531 18:17:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:34.531 18:17:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:34.531 18:17:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:34.531 18:17:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:34.531 18:17:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.531 18:17:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.531 18:17:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:34.531 18:17:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:34.531 18:17:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:34.531 18:17:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:34.531 18:17:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:34.531 18:17:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.531 18:17:32 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:34.531 18:17:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:34.531 18:17:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:34.531 18:17:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:34.531 18:17:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:34.531 18:17:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:34.531 Cannot find device "nvmf_tgt_br" 00:22:34.531 18:17:32 -- nvmf/common.sh@154 -- # true 00:22:34.531 18:17:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.531 Cannot find device "nvmf_tgt_br2" 00:22:34.531 18:17:32 -- nvmf/common.sh@155 -- # true 00:22:34.531 18:17:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:34.531 18:17:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:34.531 Cannot find device "nvmf_tgt_br" 00:22:34.531 18:17:32 -- nvmf/common.sh@157 -- # true 00:22:34.531 18:17:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:34.531 Cannot find device "nvmf_tgt_br2" 00:22:34.531 18:17:32 -- nvmf/common.sh@158 -- # true 00:22:34.531 18:17:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:34.789 18:17:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:34.789 18:17:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:34.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.789 18:17:32 -- nvmf/common.sh@161 -- # true 00:22:34.789 18:17:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:34.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.789 18:17:32 -- nvmf/common.sh@162 -- # true 00:22:34.789 18:17:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:34.789 18:17:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:34.789 18:17:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:34.789 18:17:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:34.789 18:17:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:34.789 18:17:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:34.789 18:17:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:34.789 18:17:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:34.789 18:17:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:34.789 18:17:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:34.789 18:17:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:34.789 18:17:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:34.789 18:17:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:34.789 18:17:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:34.789 18:17:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:34.789 18:17:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:34.789 18:17:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:34.789 18:17:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:34.789 18:17:32 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:34.789 18:17:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:34.789 18:17:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:34.789 18:17:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:34.789 18:17:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:34.789 18:17:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:34.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:22:34.789 00:22:34.789 --- 10.0.0.2 ping statistics --- 00:22:34.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.789 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:22:34.789 18:17:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:34.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:34.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:34.789 00:22:34.789 --- 10.0.0.3 ping statistics --- 00:22:34.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.789 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:34.789 18:17:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:34.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:34.789 00:22:34.789 --- 10.0.0.1 ping statistics --- 00:22:34.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.789 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:34.789 18:17:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.789 18:17:32 -- nvmf/common.sh@421 -- # return 0 00:22:34.789 18:17:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:34.789 18:17:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.789 18:17:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:34.789 18:17:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:34.789 18:17:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.789 18:17:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:34.789 18:17:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:34.789 18:17:32 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:34.789 18:17:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:34.789 18:17:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:34.789 18:17:32 -- common/autotest_common.sh@10 -- # set +x 00:22:34.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.789 18:17:32 -- nvmf/common.sh@469 -- # nvmfpid=86201 00:22:34.789 18:17:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:34.789 18:17:32 -- nvmf/common.sh@470 -- # waitforlisten 86201 00:22:34.789 18:17:32 -- common/autotest_common.sh@819 -- # '[' -z 86201 ']' 00:22:34.789 18:17:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.789 18:17:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:34.789 18:17:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
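The nvmf_veth_init sequence traced above builds the two-path topology used by the multipath test: the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3 on two veth links, the initiator keeps 10.0.0.1 in the root namespace, and the peer ends are bridged together so the pings above can flow. A condensed sketch of that plumbing, with names and addresses taken from the trace (iptables rules, teardown of stale devices, and error handling omitted; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target path 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br                       # bridge the three peer ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> both target addresses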
00:22:34.789 18:17:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:34.789 18:17:32 -- common/autotest_common.sh@10 -- # set +x 00:22:35.048 [2024-04-25 18:17:32.753948] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:35.048 [2024-04-25 18:17:32.754027] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.048 [2024-04-25 18:17:32.886064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:35.048 [2024-04-25 18:17:32.970485] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:35.048 [2024-04-25 18:17:32.970867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.048 [2024-04-25 18:17:32.970920] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.048 [2024-04-25 18:17:32.971057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.048 [2024-04-25 18:17:32.971325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.048 [2024-04-25 18:17:32.971330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.982 18:17:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:35.982 18:17:33 -- common/autotest_common.sh@852 -- # return 0 00:22:35.982 18:17:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:35.982 18:17:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:35.982 18:17:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.982 18:17:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.982 18:17:33 -- host/multipath.sh@33 -- # nvmfapp_pid=86201 00:22:35.982 18:17:33 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:36.240 [2024-04-25 18:17:34.030415] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.240 18:17:34 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:36.512 Malloc0 00:22:36.512 18:17:34 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:36.783 18:17:34 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.783 18:17:34 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.042 [2024-04-25 18:17:34.939666] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.042 18:17:34 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:37.300 [2024-04-25 18:17:35.139775] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.300 18:17:35 -- host/multipath.sh@44 -- # bdevperf_pid=86299 00:22:37.300 18:17:35 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 
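With the target up inside the namespace and answering on /var/tmp/spdk.sock, the test provisions it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exposed through two listeners on 10.0.0.2 (ports 4420 and 4421), which is what gives the bdevperf host two paths to the same namespace. Paraphrased from the rpc.py calls traced above:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB bdev, 512-byte blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The -r flag enables ANA reporting on the subsystem; the later nvmf_subsystem_listener_set_ana_state calls toggle the ANA state per listener, and the bpftrace probes on nvmf_path.bt count I/O per @path[address, port] to confirm that traffic actually follows the optimized/non_optimized/inaccessible state just set.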
00:22:37.300 18:17:35 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.300 18:17:35 -- host/multipath.sh@47 -- # waitforlisten 86299 /var/tmp/bdevperf.sock 00:22:37.300 18:17:35 -- common/autotest_common.sh@819 -- # '[' -z 86299 ']' 00:22:37.300 18:17:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.300 18:17:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:37.300 18:17:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.300 18:17:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:37.301 18:17:35 -- common/autotest_common.sh@10 -- # set +x 00:22:38.237 18:17:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:38.237 18:17:36 -- common/autotest_common.sh@852 -- # return 0 00:22:38.237 18:17:36 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:38.495 18:17:36 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:38.755 Nvme0n1 00:22:38.755 18:17:36 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:39.014 Nvme0n1 00:22:39.014 18:17:36 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:39.014 18:17:36 -- host/multipath.sh@78 -- # sleep 1 00:22:40.390 18:17:37 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:40.390 18:17:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:40.390 18:17:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:40.649 18:17:38 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:40.649 18:17:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86201 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:40.649 18:17:38 -- host/multipath.sh@65 -- # dtrace_pid=86381 00:22:40.649 18:17:38 -- host/multipath.sh@66 -- # sleep 6 00:22:47.209 18:17:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:47.209 18:17:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:47.209 18:17:44 -- host/multipath.sh@67 -- # active_port=4421 00:22:47.209 18:17:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:47.209 Attaching 4 probes... 
00:22:47.209 @path[10.0.0.2, 4421]: 21300 00:22:47.209 @path[10.0.0.2, 4421]: 21857 00:22:47.209 @path[10.0.0.2, 4421]: 22178 00:22:47.209 @path[10.0.0.2, 4421]: 21895 00:22:47.209 @path[10.0.0.2, 4421]: 22103 00:22:47.209 18:17:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:47.209 18:17:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:47.209 18:17:44 -- host/multipath.sh@69 -- # sed -n 1p 00:22:47.209 18:17:44 -- host/multipath.sh@69 -- # port=4421 00:22:47.209 18:17:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:47.209 18:17:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:47.209 18:17:44 -- host/multipath.sh@72 -- # kill 86381 00:22:47.209 18:17:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:47.209 18:17:44 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:47.209 18:17:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:47.209 18:17:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:47.209 18:17:45 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:47.209 18:17:45 -- host/multipath.sh@65 -- # dtrace_pid=86518 00:22:47.209 18:17:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86201 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:47.209 18:17:45 -- host/multipath.sh@66 -- # sleep 6 00:22:53.845 18:17:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:53.845 18:17:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:53.846 18:17:51 -- host/multipath.sh@67 -- # active_port=4420 00:22:53.846 18:17:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:53.846 Attaching 4 probes... 
00:22:53.846 @path[10.0.0.2, 4420]: 21698 00:22:53.846 @path[10.0.0.2, 4420]: 21727 00:22:53.846 @path[10.0.0.2, 4420]: 21896 00:22:53.846 @path[10.0.0.2, 4420]: 22394 00:22:53.846 @path[10.0.0.2, 4420]: 22333 00:22:53.846 18:17:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:53.846 18:17:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:53.846 18:17:51 -- host/multipath.sh@69 -- # sed -n 1p 00:22:53.846 18:17:51 -- host/multipath.sh@69 -- # port=4420 00:22:53.846 18:17:51 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:53.846 18:17:51 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:53.846 18:17:51 -- host/multipath.sh@72 -- # kill 86518 00:22:53.846 18:17:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:53.846 18:17:51 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:53.846 18:17:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:53.846 18:17:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:54.104 18:17:51 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:54.104 18:17:51 -- host/multipath.sh@65 -- # dtrace_pid=86647 00:22:54.104 18:17:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86201 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:54.104 18:17:51 -- host/multipath.sh@66 -- # sleep 6 00:23:00.666 18:17:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:00.666 18:17:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:00.666 18:17:58 -- host/multipath.sh@67 -- # active_port=4421 00:23:00.666 18:17:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.666 Attaching 4 probes... 
00:23:00.666 @path[10.0.0.2, 4421]: 15505 00:23:00.666 @path[10.0.0.2, 4421]: 22227 00:23:00.666 @path[10.0.0.2, 4421]: 22399 00:23:00.666 @path[10.0.0.2, 4421]: 22157 00:23:00.666 @path[10.0.0.2, 4421]: 22178 00:23:00.666 18:17:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:00.666 18:17:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:00.666 18:17:58 -- host/multipath.sh@69 -- # sed -n 1p 00:23:00.666 18:17:58 -- host/multipath.sh@69 -- # port=4421 00:23:00.666 18:17:58 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:00.666 18:17:58 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:00.666 18:17:58 -- host/multipath.sh@72 -- # kill 86647 00:23:00.666 18:17:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.666 18:17:58 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:00.666 18:17:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:00.666 18:17:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:00.925 18:17:58 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:00.925 18:17:58 -- host/multipath.sh@65 -- # dtrace_pid=86779 00:23:00.925 18:17:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86201 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:00.925 18:17:58 -- host/multipath.sh@66 -- # sleep 6 00:23:07.492 18:18:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:07.492 18:18:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:07.492 18:18:04 -- host/multipath.sh@67 -- # active_port= 00:23:07.492 18:18:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:07.492 Attaching 4 probes... 
00:23:07.492 00:23:07.492 00:23:07.492 00:23:07.492 00:23:07.492 00:23:07.492 18:18:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:07.492 18:18:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:07.492 18:18:04 -- host/multipath.sh@69 -- # sed -n 1p 00:23:07.492 18:18:04 -- host/multipath.sh@69 -- # port= 00:23:07.492 18:18:04 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:07.492 18:18:04 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:07.492 18:18:04 -- host/multipath.sh@72 -- # kill 86779 00:23:07.492 18:18:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:07.492 18:18:04 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:07.492 18:18:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:07.492 18:18:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:07.492 18:18:05 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:07.492 18:18:05 -- host/multipath.sh@65 -- # dtrace_pid=86914 00:23:07.492 18:18:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86201 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:07.492 18:18:05 -- host/multipath.sh@66 -- # sleep 6 00:23:14.055 18:18:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:14.055 18:18:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:14.055 18:18:11 -- host/multipath.sh@67 -- # active_port=4421 00:23:14.055 18:18:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:14.055 Attaching 4 probes... 
00:23:14.055 @path[10.0.0.2, 4421]: 21366 00:23:14.055 @path[10.0.0.2, 4421]: 21840 00:23:14.055 @path[10.0.0.2, 4421]: 21812 00:23:14.055 @path[10.0.0.2, 4421]: 21805 00:23:14.055 @path[10.0.0.2, 4421]: 21673 00:23:14.055 18:18:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:14.055 18:18:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:14.055 18:18:11 -- host/multipath.sh@69 -- # sed -n 1p 00:23:14.055 18:18:11 -- host/multipath.sh@69 -- # port=4421 00:23:14.055 18:18:11 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:14.055 18:18:11 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:14.055 18:18:11 -- host/multipath.sh@72 -- # kill 86914 00:23:14.055 18:18:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:14.055 18:18:11 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.055 [2024-04-25 18:18:11.813818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.813994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.055 [2024-04-25 18:18:11.814189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814205] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 [2024-04-25 18:18:11.814405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14610b0 is same with the state(5) to be set 00:23:14.056 18:18:11 -- host/multipath.sh@101 -- # sleep 1 00:23:14.991 18:18:12 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:14.991 18:18:12 -- host/multipath.sh@65 -- # dtrace_pid=87044 00:23:14.991 18:18:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86201 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:14.991 18:18:12 -- host/multipath.sh@66 -- # sleep 6 00:23:21.551 18:18:18 -- 
host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:21.551 18:18:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:21.551 18:18:19 -- host/multipath.sh@67 -- # active_port=4420 00:23:21.551 18:18:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:21.551 Attaching 4 probes... 00:23:21.551 @path[10.0.0.2, 4420]: 21036 00:23:21.551 @path[10.0.0.2, 4420]: 20919 00:23:21.551 @path[10.0.0.2, 4420]: 21027 00:23:21.551 @path[10.0.0.2, 4420]: 21238 00:23:21.551 @path[10.0.0.2, 4420]: 20927 00:23:21.551 18:18:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:21.551 18:18:19 -- host/multipath.sh@69 -- # sed -n 1p 00:23:21.551 18:18:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:21.551 18:18:19 -- host/multipath.sh@69 -- # port=4420 00:23:21.551 18:18:19 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:21.551 18:18:19 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:21.551 18:18:19 -- host/multipath.sh@72 -- # kill 87044 00:23:21.551 18:18:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:21.551 18:18:19 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:21.551 [2024-04-25 18:18:19.324562] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.551 18:18:19 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:21.811 18:18:19 -- host/multipath.sh@111 -- # sleep 6 00:23:28.376 18:18:25 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:28.376 18:18:25 -- host/multipath.sh@65 -- # dtrace_pid=87232 00:23:28.376 18:18:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86201 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:28.376 18:18:25 -- host/multipath.sh@66 -- # sleep 6 00:23:34.963 18:18:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:34.963 18:18:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:34.963 18:18:31 -- host/multipath.sh@67 -- # active_port=4421 00:23:34.963 18:18:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.963 Attaching 4 probes... 
00:23:34.963 @path[10.0.0.2, 4421]: 20573 00:23:34.963 @path[10.0.0.2, 4421]: 20854 00:23:34.963 @path[10.0.0.2, 4421]: 20933 00:23:34.963 @path[10.0.0.2, 4421]: 20781 00:23:34.963 @path[10.0.0.2, 4421]: 20937 00:23:34.963 18:18:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:34.963 18:18:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:34.963 18:18:31 -- host/multipath.sh@69 -- # sed -n 1p 00:23:34.963 18:18:31 -- host/multipath.sh@69 -- # port=4421 00:23:34.963 18:18:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.963 18:18:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.963 18:18:31 -- host/multipath.sh@72 -- # kill 87232 00:23:34.963 18:18:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.963 18:18:31 -- host/multipath.sh@114 -- # killprocess 86299 00:23:34.963 18:18:31 -- common/autotest_common.sh@926 -- # '[' -z 86299 ']' 00:23:34.963 18:18:31 -- common/autotest_common.sh@930 -- # kill -0 86299 00:23:34.963 18:18:31 -- common/autotest_common.sh@931 -- # uname 00:23:34.963 18:18:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:34.963 18:18:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86299 00:23:34.963 18:18:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:34.963 18:18:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:34.963 killing process with pid 86299 00:23:34.963 18:18:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86299' 00:23:34.963 18:18:31 -- common/autotest_common.sh@945 -- # kill 86299 00:23:34.963 18:18:31 -- common/autotest_common.sh@950 -- # wait 86299 00:23:34.963 Connection closed with partial response: 00:23:34.963 00:23:34.963 00:23:34.963 18:18:32 -- host/multipath.sh@116 -- # wait 86299 00:23:34.963 18:18:32 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:34.963 [2024-04-25 18:17:35.195821] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:34.963 [2024-04-25 18:17:35.195927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86299 ] 00:23:34.963 [2024-04-25 18:17:35.332201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.963 [2024-04-25 18:17:35.430080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.963 Running I/O for 90 seconds... 
00:23:34.963 [2024-04-25 18:17:45.012988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.963 [2024-04-25 18:17:45.013047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.963 [2024-04-25 18:17:45.013110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.963 [2024-04-25 18:17:45.013361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.963 [2024-04-25 18:17:45.013492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.963 [2024-04-25 18:17:45.013533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:34.963 [2024-04-25 18:17:45.013637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.963 [2024-04-25 18:17:45.013650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.013984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.013998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.964 [2024-04-25 18:17:45.014197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.964 [2024-04-25 18:17:45.014229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.964 [2024-04-25 18:17:45.014417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.014903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.014922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.964 [2024-04-25 18:17:45.014935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.015461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.964 [2024-04-25 18:17:45.015499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.015528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.964 [2024-04-25 18:17:45.015544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.015567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.964 [2024-04-25 18:17:45.015582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.015610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.964 [2024-04-25 18:17:45.015625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.964 [2024-04-25 18:17:45.015648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.964 [2024-04-25 18:17:45.015670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.015722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.015737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.015757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.015770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.015790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.015804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.015838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.015852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.015870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.015885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.015905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.015918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.015939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.015953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:23:34.965 [2024-04-25 18:17:45.015972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.015993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.016888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.016974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.965 [2024-04-25 18:17:45.016988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.017007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:34.965 [2024-04-25 18:17:45.017025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.017045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.965 [2024-04-25 18:17:45.017059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.965 [2024-04-25 18:17:45.017079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.017289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.017337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.017381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.017418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 
nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.017454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.017883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.017917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.017969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.017983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.018016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.018048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.018081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.018113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.018146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:23:34.966 [2024-04-25 18:17:45.018165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.018179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.018218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.018254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.018310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.018355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.018391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.018410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.966 [2024-04-25 18:17:45.018424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.019217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.019243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.019267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.019311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:34.966 [2024-04-25 18:17:45.019336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.966 [2024-04-25 18:17:45.019350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.525471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.525534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.525587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.525640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.525682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.525774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.525811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.525849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.525887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.525925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.525962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.525983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.525999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.526911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.526948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.526971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.967 [2024-04-25 18:17:51.527015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.967 [2024-04-25 18:17:51.527040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.967 [2024-04-25 18:17:51.527057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.968 [2024-04-25 18:17:51.527135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.968 [2024-04-25 18:17:51.527251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 
18:17:51.527426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.527902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.968 [2024-04-25 18:17:51.527939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.527960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.968 [2024-04-25 18:17:51.527983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.968 [2024-04-25 18:17:51.528022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.968 [2024-04-25 18:17:51.528061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.968 [2024-04-25 18:17:51.528399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.968 [2024-04-25 18:17:51.528415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.528437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.528453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.528484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.528501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.528523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.528539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.529946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.529966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.529981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.530137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.969 [2024-04-25 18:17:51.530174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.969 [2024-04-25 18:17:51.530530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.969 [2024-04-25 18:17:51.530553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.530569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.530604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.530640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.530675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.530711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.530746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.530782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.530817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.530853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.530888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.530923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.530965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.530987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:48 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.970 [2024-04-25 18:17:51.531916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.531980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.531996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.532016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.532031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.532052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.970 [2024-04-25 18:17:51.532067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.970 [2024-04-25 18:17:51.532087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.532102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.532122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.532137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.532812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.532838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.532865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.532883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.532904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.532926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.532947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.532963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.532983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.532998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.533680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 
18:17:51.533793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.533977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.533992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126072 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.971 [2024-04-25 18:17:51.534292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.971 [2024-04-25 18:17:51.534328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.971 [2024-04-25 18:17:51.534345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.534382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.534418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.534460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.534827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.534848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.534869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.535524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 
p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.535570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.535610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.535647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.535754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.535792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.535830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.535868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.535905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.535943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.535966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.544627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.544686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.972 [2024-04-25 18:17:51.544709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.544733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.544750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.544772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.544788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.544811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.544828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.544850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.544865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:34.972 [2024-04-25 18:17:51.544887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.972 [2024-04-25 18:17:51.544920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.544945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.544961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.544983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.544999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 
18:17:51.545037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.545074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.545257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.545314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126384 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.545708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.545746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.545853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.545933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.545971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.545992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.546007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 
p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.546262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.973 [2024-04-25 18:17:51.546350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.546387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.546434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:34.973 [2024-04-25 18:17:51.546455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.973 [2024-04-25 18:17:51.546470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.546615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.546834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.546918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.546954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.546976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.546991] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.547013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.547028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.547049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.547064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.547086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.547101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.547122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.547138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.547159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.547174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.547196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.547212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126592 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548641] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.974 [2024-04-25 18:17:51.548815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.974 [2024-04-25 18:17:51.548836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.974 [2024-04-25 18:17:51.548851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.548872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.548888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.548909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.548924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.548945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.548961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.548982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.548997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.975 
[2024-04-25 18:17:51.549018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.549653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.549693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.549748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.549788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.549966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.549982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.550200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:34.975 [2024-04-25 18:17:51.550283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.550320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.975 [2024-04-25 18:17:51.550373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.975 [2024-04-25 18:17:51.550447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.975 [2024-04-25 18:17:51.550468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.550484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.550505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.550521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.550541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.550556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.550577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.550593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.550614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.550629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.550658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:90 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.550675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.550706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.550724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.551364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 
18:17:51.551658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.551709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.551872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.551909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.551966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.551982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.552301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.552340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.552413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.552487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.976 [2024-04-25 18:17:51.552523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.976 [2024-04-25 18:17:51.552544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.976 [2024-04-25 18:17:51.552560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.552828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.552901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.552946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.552968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.552983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553152] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 
18:17:51.553633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.977 [2024-04-25 18:17:51.553691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.553889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.553906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.554528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.554555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.554582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.977 [2024-04-25 18:17:51.554601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.977 [2024-04-25 18:17:51.554624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.554640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.554676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.554713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.554749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.554786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.554823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.554868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.554916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.554957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.554978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.554994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.555104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.555178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.555251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.555389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.555799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.555820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.563678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.563764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.563795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.563829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.563852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.563883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.563906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.563937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.978 [2024-04-25 18:17:51.563960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.563991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.564013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.564046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.564068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:34.978 [2024-04-25 18:17:51.564100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.978 [2024-04-25 18:17:51.564123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.564177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 
18:17:51.564262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.564967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.564999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.565020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.565074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.565128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.565180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.565306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.565366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.565421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.565475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.565546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.565600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.565654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.565707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.565760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.565792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.565814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.566780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.566820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.566862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.566888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.566920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.566942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.566974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.566996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.567415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.979 [2024-04-25 18:17:51.567574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.979 [2024-04-25 18:17:51.567626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:34.979 [2024-04-25 18:17:51.567658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.567679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.567710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.567732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.567763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.567784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.567815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:108 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.567849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.567883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.567905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.567936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.567958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.567989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.568223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.568291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 
18:17:51.568380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.568402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.568518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.568627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.568944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.568975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.568997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.569052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.569105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.980 [2024-04-25 18:17:51.569534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.980 [2024-04-25 18:17:51.569777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.980 [2024-04-25 18:17:51.569799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.569830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.569851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.569894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.569917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.569950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.569972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.570003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:34.981 [2024-04-25 18:17:51.570025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.570056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.570078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.570109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.570131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.570162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.570183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.570215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.570236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.570283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.570309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.571234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.571321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.571377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.571431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:68 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.571501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.571558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.571611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.571665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.571718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.571772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.571826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.571878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.571931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.571963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.571984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572015] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.572037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.572091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.572154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.572210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.572263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.572339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.572393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.572448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.981 [2024-04-25 18:17:51.572500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.572555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 
cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.981 [2024-04-25 18:17:51.572587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.981 [2024-04-25 18:17:51.572609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.572640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.572662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.572693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.572716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.572747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.572769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.572800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.572837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.572871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.572893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.572924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.572946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.572977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.572999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.573467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.573520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.573587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.573641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:34.982 [2024-04-25 18:17:51.573693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.573960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.573991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.574012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.574066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.574119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.574172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.574237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.574308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.574362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.574415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.574468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.574522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.574575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.574627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.982 [2024-04-25 18:17:51.574680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.982 [2024-04-25 18:17:51.574733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.982 [2024-04-25 18:17:51.574766] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.574788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.575690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.575727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.575768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.575808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.575842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.575865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.575896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.575919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.575950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.575972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 
sqhd:0017 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.576366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.576596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.576650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.576947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.576969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.577022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.577076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.577140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.577195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.577297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 
[2024-04-25 18:17:51.577354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.577408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.577473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.577512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.577550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.983 [2024-04-25 18:17:51.577588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:34.983 [2024-04-25 18:17:51.577610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.983 [2024-04-25 18:17:51.577626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.577664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.577703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.577741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.577791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.577874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.577911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.577947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.577969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.577984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578189] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.578241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.578497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.578569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 
sqhd:0048 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.578641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.578676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.578712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.578733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.578759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.579026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.579095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.579138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.579180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.579221] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.579262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.579324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.984 [2024-04-25 18:17:51.579364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.984 [2024-04-25 18:17:51.579390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.984 [2024-04-25 18:17:51.579405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.579447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.579488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.579540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.579585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.579626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 
18:17:51.579666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.579707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.579748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.579788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.579829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.579869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.579910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.579951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.579976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.579991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.580039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125896 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.580767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.580808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.580849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.985 [2024-04-25 18:17:51.580889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:23:34.985 [2024-04-25 18:17:51.580914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.580971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.580996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.985 [2024-04-25 18:17:51.581011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:34.985 [2024-04-25 18:17:51.581036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.986 [2024-04-25 18:17:51.581359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.986 [2024-04-25 18:17:51.581443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.986 [2024-04-25 18:17:51.581491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.986 [2024-04-25 18:17:51.581550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.986 [2024-04-25 18:17:51.581593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.986 [2024-04-25 18:17:51.581620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:51.581637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:51.581673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:51.581692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:51.581719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:51.581735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:51.581762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:51.581778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:51.581806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 
18:17:51.581822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:51.582009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:51.582034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.606939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48440 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.607606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.607778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.607794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.608565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.608627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.608676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.608718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.608757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.987 [2024-04-25 18:17:58.608796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.608836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.608874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.608914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.608937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.608952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:23:34.987 [2024-04-25 18:17:58.608975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.608990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.609014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.609032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.609056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.609072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.609094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.609110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:34.987 [2024-04-25 18:17:58.609143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.987 [2024-04-25 18:17:58.609160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.609274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.609334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.609419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.609685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.609729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.609801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.609875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.609978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.609993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.610823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.610949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.610975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.988 [2024-04-25 18:17:58.611001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.611029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.611045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.988 [2024-04-25 18:17:58.611071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.988 [2024-04-25 18:17:58.611087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.611128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.611169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.611790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:23:34.989 [2024-04-25 18:17:58.611815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.611915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.611956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.611982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.611998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.612040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.612091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.612135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.612177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.612218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.612260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.612343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.612387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.612430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.612475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.612517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:17:58.612560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:17:58.612588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.989 [2024-04-25 18:17:58.612604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:18:11.815035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:18:11.815130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:18:11.815165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:18:11.815182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:18:11.815200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:18:11.815215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:18:11.815249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.989 [2024-04-25 18:18:11.815263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.989 [2024-04-25 18:18:11.815278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815532] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.990 [2024-04-25 18:18:11.815861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.815983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.815999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.990 [2024-04-25 18:18:11.816392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.990 [2024-04-25 18:18:11.816421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.990 [2024-04-25 18:18:11.816482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
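For reference: the "(03/02)" and "(00/08)" suffixes in the spdk_nvme_print_completion lines above are the NVMe status code type / status code pair in hex. SCT 0x3 / SC 0x02 is the path-related status "Asymmetric Access Inaccessible" (the path's ANA group is inaccessible while the controller is being failed over), and SCT 0x0 / SC 0x08 is the generic status "Command Aborted due to SQ Deletion" (I/O still outstanding when the submission queue is torn down). A minimal stand-alone sketch of decoding such a pair, covering only the two statuses seen in this log (the helper below is illustrative, not an SPDK API):

/* decode_status.c - illustrative decoder for the "(sct/sc)" pair printed above */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x3 && sc == 0x02) {
        return "ASYMMETRIC ACCESS INACCESSIBLE"; /* Path Related Status / ANA inaccessible */
    }
    if (sct == 0x0 && sc == 0x08) {
        return "ABORTED - SQ DELETION";          /* Generic Command Status / SQ deletion abort */
    }
    return "UNKNOWN STATUS";                     /* anything not covered by this sketch */
}

int main(void)
{
    /* the two pairs that appear in the completions above */
    printf("03/02 -> %s\n", decode_status(0x3, 0x02));
    printf("00/08 -> %s\n", decode_status(0x0, 0x08));
    return 0;
}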
00:23:34.990 [2024-04-25 18:18:11.816497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.990 [2024-04-25 18:18:11.816511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.990 [2024-04-25 18:18:11.816525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.990 [2024-04-25 18:18:11.816539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.816625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.816653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.816746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 
18:18:11.816791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.816919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.816948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.816976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.816990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.991 [2024-04-25 18:18:11.817571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.991 [2024-04-25 18:18:11.817621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.991 [2024-04-25 18:18:11.817677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.992 [2024-04-25 18:18:11.817744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.817985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.817999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.992 [2024-04-25 18:18:11.818274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.992 [2024-04-25 18:18:11.818347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 
[2024-04-25 18:18:11.818482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.992 [2024-04-25 18:18:11.818511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.992 [2024-04-25 18:18:11.818541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818776] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.992 [2024-04-25 18:18:11.818841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.992 [2024-04-25 18:18:11.818870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.992 [2024-04-25 18:18:11.818899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.992 [2024-04-25 18:18:11.818914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.993 [2024-04-25 18:18:11.818928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.818943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.818957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.818972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.993 [2024-04-25 18:18:11.818986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.819014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.819042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.819071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.819100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.819128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.819156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.993 [2024-04-25 18:18:11.819191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193bbd0 is same with the state(5) to be set 00:23:34.993 [2024-04-25 18:18:11.819224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:34.993 [2024-04-25 18:18:11.819241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:34.993 [2024-04-25 18:18:11.819253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109680 len:8 PRP1 0x0 PRP2 0x0 00:23:34.993 [2024-04-25 18:18:11.819266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.993 [2024-04-25 18:18:11.819354] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x193bbd0 was disconnected and freed. reset controller. 
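The block above is the NVMe driver draining qpair 1: every outstanding READ/WRITE is completed manually with ABORTED - SQ DELETION after the qpair (0x193bbd0) is disconnected, and only then does the controller reset start. A minimal, hypothetical helper for summarizing such a flood from a saved copy of this output (assuming it was captured to a file, e.g. the try.txt this test deletes during its cleanup further below):
  grep -o 'ABORTED - SQ DELETION' try.txt | wc -l            # total commands completed manually
  grep -oE '(READ|WRITE) sqid:1' try.txt | sort | uniq -c    # split of the aborted I/O by opcode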
00:23:34.993 [2024-04-25 18:18:11.820540] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.993 [2024-04-25 18:18:11.820637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad66e0 (9): Bad file descriptor
00:23:34.993 [2024-04-25 18:18:11.820796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.993 [2024-04-25 18:18:11.820857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.993 [2024-04-25 18:18:11.820883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad66e0 with addr=10.0.0.2, port=4421
00:23:34.993 [2024-04-25 18:18:11.820900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad66e0 is same with the state(5) to be set
00:23:34.993 [2024-04-25 18:18:11.820926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad66e0 (9): Bad file descriptor
00:23:34.993 [2024-04-25 18:18:11.820951] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.993 [2024-04-25 18:18:11.820967] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.993 [2024-04-25 18:18:11.820984] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.993 [2024-04-25 18:18:11.821010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.993 [2024-04-25 18:18:11.821026] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.993 [2024-04-25 18:18:21.879170] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
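errno = 111 is ECONNREFUSED: the first reconnect attempt to 10.0.0.2:4421 is refused, the controller is marked failed, and bdev_nvme keeps retrying until the path comes back about ten seconds later. How long and how often it retries is decided when the controller is attached; as a sketch only (these flags mirror the ones the timeout test later in this log passes to bdevperf, not necessarily what multipath.sh itself used):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2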
00:23:34.993 Received shutdown signal, test time was about 54.938439 seconds
00:23:34.993
00:23:34.993                     Latency(us)
00:23:34.993 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:23:34.993 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:34.993 Verification LBA range: start 0x0 length 0x4000
00:23:34.993 Nvme0n1            :      54.94   12210.06      47.70       0.00      0.00   10466.22    1251.14 7015926.69
00:23:34.993 ===================================================================================================================
00:23:34.993 Total              :   12210.06      47.70       0.00       0.00   10466.22    1251.14 7015926.69
00:23:34.993 18:18:32 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:34.993 18:18:32 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:23:34.993 18:18:32 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:34.993 18:18:32 -- host/multipath.sh@125 -- # nvmftestfini
00:23:34.993 18:18:32 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:34.993 18:18:32 -- nvmf/common.sh@116 -- # sync
00:23:34.993 18:18:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:34.993 18:18:32 -- nvmf/common.sh@119 -- # set +e
00:23:34.993 18:18:32 -- nvmf/common.sh@120 -- # for i in {1..20}
00:23:34.993 18:18:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:34.993 rmmod nvme_tcp
00:23:34.993 rmmod nvme_fabrics
00:23:34.993 rmmod nvme_keyring
00:23:34.993 18:18:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:34.993 18:18:32 -- nvmf/common.sh@123 -- # set -e
00:23:34.993 18:18:32 -- nvmf/common.sh@124 -- # return 0
00:23:34.993 18:18:32 -- nvmf/common.sh@477 -- # '[' -n 86201 ']'
00:23:34.993 18:18:32 -- nvmf/common.sh@478 -- # killprocess 86201
00:23:34.993 18:18:32 -- common/autotest_common.sh@926 -- # '[' -z 86201 ']'
00:23:34.993 18:18:32 -- common/autotest_common.sh@930 -- # kill -0 86201
00:23:34.993 18:18:32 -- common/autotest_common.sh@931 -- # uname
00:23:34.993 18:18:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:34.993 18:18:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86201
00:23:34.993 18:18:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:23:34.993 18:18:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:23:34.993 killing process with pid 86201
00:23:34.993 18:18:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86201'
00:23:34.993 18:18:32 -- common/autotest_common.sh@945 -- # kill 86201
00:23:34.993 18:18:32 -- common/autotest_common.sh@950 -- # wait 86201
00:23:34.993 18:18:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:23:34.993 18:18:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:23:34.993 18:18:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:23:34.993 18:18:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:34.993 18:18:32 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:23:34.993 18:18:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:34.993 18:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:34.993 18:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:34.993 18:18:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:23:34.993
00:23:34.993 real	1m0.532s
00:23:34.993 user	2m48.747s
00:23:34.993 sys	0m14.791s
00:23:34.993 18:18:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:34.993
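End of the multipath run: the bdevperf job held roughly 12.2k IOPS over ~55 seconds despite the forced path failures above, and the harness then tears everything down. Condensed to its effective commands, as a sketch that reuses the PID and paths the trace shows (wait only succeeds here because nvmf_tgt is a child of the harness shell):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 86201 && wait 86201        # stop the nvmf_tgt started for this test
  modprobe -v -r nvme-tcp         # also unloads nvme_fabrics and nvme_keyring, as the rmmod lines above show
  modprobe -v -r nvme-fabrics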
18:18:32 -- common/autotest_common.sh@10 -- # set +x 00:23:34.993 ************************************ 00:23:34.993 END TEST nvmf_multipath 00:23:34.993 ************************************ 00:23:34.993 18:18:32 -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:34.993 18:18:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:34.993 18:18:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:34.993 18:18:32 -- common/autotest_common.sh@10 -- # set +x 00:23:34.993 ************************************ 00:23:34.993 START TEST nvmf_timeout 00:23:34.993 ************************************ 00:23:34.993 18:18:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:35.253 * Looking for test storage... 00:23:35.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:35.253 18:18:32 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:35.253 18:18:32 -- nvmf/common.sh@7 -- # uname -s 00:23:35.253 18:18:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.253 18:18:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.253 18:18:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.253 18:18:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.253 18:18:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.253 18:18:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.253 18:18:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.253 18:18:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.253 18:18:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.253 18:18:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.253 18:18:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:23:35.253 18:18:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:23:35.253 18:18:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.253 18:18:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.253 18:18:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:35.253 18:18:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:35.253 18:18:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.253 18:18:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.253 18:18:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.253 18:18:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.253 18:18:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.253 18:18:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.253 18:18:32 -- paths/export.sh@5 -- # export PATH 00:23:35.253 18:18:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.253 18:18:32 -- nvmf/common.sh@46 -- # : 0 00:23:35.253 18:18:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:35.253 18:18:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:35.253 18:18:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:35.253 18:18:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.253 18:18:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.253 18:18:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:35.253 18:18:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:35.253 18:18:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:35.253 18:18:32 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:35.253 18:18:32 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:35.253 18:18:32 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:35.253 18:18:32 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:35.253 18:18:32 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.253 18:18:32 -- host/timeout.sh@19 -- # nvmftestinit 00:23:35.254 18:18:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:35.254 18:18:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.254 18:18:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:35.254 18:18:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:35.254 18:18:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:35.254 18:18:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.254 18:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.254 18:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.254 18:18:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
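With NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init, which builds the whole fabric locally in the steps traced below: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if(2)/nvmf_tgt_br(2) joined by the nvmf_br bridge, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace. A few read-only commands, not part of the harness, that can be used to inspect the result once that setup has run:
  ip netns list                                 # expect nvmf_tgt_ns_spdk
  ip -4 addr show dev nvmf_init_if              # expect 10.0.0.1/24
  ip netns exec nvmf_tgt_ns_spdk ip -4 addr     # expect 10.0.0.2/24 and 10.0.0.3/24
  bridge link show                              # the nvmf_* interfaces enslaved to nvmf_br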
00:23:35.254 18:18:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:35.254 18:18:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:35.254 18:18:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:35.254 18:18:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:35.254 18:18:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:35.254 18:18:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.254 18:18:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.254 18:18:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:35.254 18:18:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:35.254 18:18:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:35.254 18:18:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:35.254 18:18:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:35.254 18:18:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.254 18:18:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:35.254 18:18:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:35.254 18:18:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:35.254 18:18:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:35.254 18:18:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:35.254 18:18:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:35.254 Cannot find device "nvmf_tgt_br" 00:23:35.254 18:18:32 -- nvmf/common.sh@154 -- # true 00:23:35.254 18:18:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:35.254 Cannot find device "nvmf_tgt_br2" 00:23:35.254 18:18:32 -- nvmf/common.sh@155 -- # true 00:23:35.254 18:18:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:35.254 18:18:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:35.254 Cannot find device "nvmf_tgt_br" 00:23:35.254 18:18:32 -- nvmf/common.sh@157 -- # true 00:23:35.254 18:18:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:35.254 Cannot find device "nvmf_tgt_br2" 00:23:35.254 18:18:33 -- nvmf/common.sh@158 -- # true 00:23:35.254 18:18:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:35.254 18:18:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:35.254 18:18:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:35.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.254 18:18:33 -- nvmf/common.sh@161 -- # true 00:23:35.254 18:18:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:35.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.254 18:18:33 -- nvmf/common.sh@162 -- # true 00:23:35.254 18:18:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:35.254 18:18:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:35.254 18:18:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:35.254 18:18:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:35.254 18:18:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:35.254 18:18:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:35.254 18:18:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:23:35.254 18:18:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:35.254 18:18:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:35.254 18:18:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:35.254 18:18:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:35.254 18:18:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:35.254 18:18:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:35.254 18:18:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:35.254 18:18:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:35.254 18:18:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:35.254 18:18:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:35.511 18:18:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:35.511 18:18:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:35.511 18:18:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:35.511 18:18:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:35.511 18:18:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:35.511 18:18:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:35.511 18:18:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:35.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:23:35.511 00:23:35.511 --- 10.0.0.2 ping statistics --- 00:23:35.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.511 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:35.511 18:18:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:35.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:35.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:23:35.511 00:23:35.511 --- 10.0.0.3 ping statistics --- 00:23:35.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.511 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:35.511 18:18:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:35.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:35.511 00:23:35.511 --- 10.0.0.1 ping statistics --- 00:23:35.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.512 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:35.512 18:18:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.512 18:18:33 -- nvmf/common.sh@421 -- # return 0 00:23:35.512 18:18:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:35.512 18:18:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.512 18:18:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:35.512 18:18:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:35.512 18:18:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.512 18:18:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:35.512 18:18:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:35.512 18:18:33 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:35.512 18:18:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:35.512 18:18:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:35.512 18:18:33 -- common/autotest_common.sh@10 -- # set +x 00:23:35.512 18:18:33 -- nvmf/common.sh@469 -- # nvmfpid=87550 00:23:35.512 18:18:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:35.512 18:18:33 -- nvmf/common.sh@470 -- # waitforlisten 87550 00:23:35.512 18:18:33 -- common/autotest_common.sh@819 -- # '[' -z 87550 ']' 00:23:35.512 18:18:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.512 18:18:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:35.512 18:18:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.512 18:18:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:35.512 18:18:33 -- common/autotest_common.sh@10 -- # set +x 00:23:35.512 [2024-04-25 18:18:33.346222] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:35.512 [2024-04-25 18:18:33.346324] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.770 [2024-04-25 18:18:33.482974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:35.770 [2024-04-25 18:18:33.574472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:35.770 [2024-04-25 18:18:33.574868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.770 [2024-04-25 18:18:33.574889] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.770 [2024-04-25 18:18:33.574898] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
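The target application is now up inside the namespace, and the app_setup_trace notices above are its standard hint for capturing tracepoints. A sketch that simply follows that hint, run on the same host while the nvmf_tgt instance with shm id 0 is still alive:
  spdk_trace -s nvmf -i 0               # snapshot the nvmf tracepoint group at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/        # or keep the raw trace file for offline analysis/debug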
00:23:35.770 [2024-04-25 18:18:33.575061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.770 [2024-04-25 18:18:33.575068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.705 18:18:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:36.705 18:18:34 -- common/autotest_common.sh@852 -- # return 0 00:23:36.705 18:18:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:36.705 18:18:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:36.705 18:18:34 -- common/autotest_common.sh@10 -- # set +x 00:23:36.705 18:18:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.705 18:18:34 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.705 18:18:34 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:36.705 [2024-04-25 18:18:34.599166] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.705 18:18:34 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:36.964 Malloc0 00:23:37.222 18:18:34 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.222 18:18:35 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.481 18:18:35 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.740 [2024-04-25 18:18:35.558205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.740 18:18:35 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:37.740 18:18:35 -- host/timeout.sh@32 -- # bdevperf_pid=87641 00:23:37.740 18:18:35 -- host/timeout.sh@34 -- # waitforlisten 87641 /var/tmp/bdevperf.sock 00:23:37.740 18:18:35 -- common/autotest_common.sh@819 -- # '[' -z 87641 ']' 00:23:37.740 18:18:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.740 18:18:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:37.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.740 18:18:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.740 18:18:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:37.740 18:18:35 -- common/autotest_common.sh@10 -- # set +x 00:23:37.740 [2024-04-25 18:18:35.616917] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:37.740 [2024-04-25 18:18:35.617013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87641 ] 00:23:37.998 [2024-04-25 18:18:35.753001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.998 [2024-04-25 18:18:35.859068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.934 18:18:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:38.934 18:18:36 -- common/autotest_common.sh@852 -- # return 0 00:23:38.934 18:18:36 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:38.935 18:18:36 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:39.193 NVMe0n1 00:23:39.193 18:18:37 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.193 18:18:37 -- host/timeout.sh@51 -- # rpc_pid=87689 00:23:39.193 18:18:37 -- host/timeout.sh@53 -- # sleep 1 00:23:39.452 Running I/O for 10 seconds... 00:23:40.390 18:18:38 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.391 [2024-04-25 18:18:38.293565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 
[2024-04-25 18:18:38.293732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.293999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294093] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1470e20 is same with the state(5) to be set 00:23:40.391 [2024-04-25 18:18:38.294859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.391 [2024-04-25 18:18:38.294909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.391 [2024-04-25 18:18:38.294932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.391 [2024-04-25 18:18:38.294943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.391 [2024-04-25 18:18:38.294954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.391 [2024-04-25 18:18:38.294962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.391 [2024-04-25 18:18:38.294973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.391 [2024-04-25 18:18:38.294982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.294992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 
[2024-04-25 18:18:38.295605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.295804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.295814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.296847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.296856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:94 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.297829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.297838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.298118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.298191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.298205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.298216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.298227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.298236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.298248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.298263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.298288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.392 [2024-04-25 18:18:38.298300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.392 [2024-04-25 18:18:38.298407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.298422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.298433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.298540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.298560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.298570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.298581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.298699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.298715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.298724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.298873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.298971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.298986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.298995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 
18:18:38.299037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.299772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.299785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.300036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.300062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.300083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.300327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.300352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.300373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.300392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.300676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.300810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.300823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.301087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.301107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.301132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.301397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.301419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.301440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.301578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.301850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.301887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.301908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.301928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.301940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.302156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.302171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.302180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.302191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.393 [2024-04-25 18:18:38.302200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:40.393 [2024-04-25 18:18:38.302211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.393 [2024-04-25 18:18:38.302220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.393 [2024-04-25 18:18:38.302370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.302441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.302464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.302484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.302505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.302657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.302892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.302923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.302942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.302953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.303211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.303236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.303255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.303875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.303884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.304800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.304808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.305092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.305411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.305665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.394 [2024-04-25 18:18:38.305688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.305978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 18:18:38.305988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.394 [2024-04-25 18:18:38.306230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.394 [2024-04-25 
18:18:38.306252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.306265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.395 [2024-04-25 18:18:38.306301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.306313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.395 [2024-04-25 18:18:38.306322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.306459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.395 [2024-04-25 18:18:38.306559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.306573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.395 [2024-04-25 18:18:38.306582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.306592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x989420 is same with the state(5) to be set 00:23:40.395 [2024-04-25 18:18:38.306604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.395 [2024-04-25 18:18:38.306612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.395 [2024-04-25 18:18:38.306736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:8 PRP1 0x0 PRP2 0x0 00:23:40.395 [2024-04-25 18:18:38.306754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.307120] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x989420 was disconnected and freed. reset controller. 
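The burst of ABORTED - SQ DELETION completions above is the host draining its queued READs and WRITEs once the TCP qpair drops: every outstanding command is completed manually with that status, the qpair (0x989420) is freed, and bdev_nvme schedules a controller reset. One way to watch the same teardown from outside the log is to poll the controller list over the bdevperf RPC socket this run already uses — a minimal sketch, assuming the checkout and socket paths shown elsewhere in this log:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # bdev_nvme_get_controllers reports NVMe0 while the controller exists and an
  # empty list once it has been torn down, so the name vanishes and reappears
  # as the reset/reconnect loop below runs.
  for i in $(seq 1 10); do
      $RPC bdev_nvme_get_controllers | jq -r '.[].name'
      sleep 1
  done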
00:23:40.395 [2024-04-25 18:18:38.307333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.395 [2024-04-25 18:18:38.307409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.307422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.395 [2024-04-25 18:18:38.307432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.307442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.395 [2024-04-25 18:18:38.307451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.307460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.395 [2024-04-25 18:18:38.307469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.395 [2024-04-25 18:18:38.307478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942170 is same with the state(5) to be set 00:23:40.395 [2024-04-25 18:18:38.307990] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:40.395 [2024-04-25 18:18:38.308023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x942170 (9): Bad file descriptor 00:23:40.395 [2024-04-25 18:18:38.308334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.395 [2024-04-25 18:18:38.308399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.395 [2024-04-25 18:18:38.308417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x942170 with addr=10.0.0.2, port=4420 00:23:40.395 [2024-04-25 18:18:38.308548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942170 is same with the state(5) to be set 00:23:40.395 [2024-04-25 18:18:38.308686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x942170 (9): Bad file descriptor 00:23:40.395 [2024-04-25 18:18:38.308946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:40.395 [2024-04-25 18:18:38.308964] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:40.395 [2024-04-25 18:18:38.309083] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:40.395 [2024-04-25 18:18:38.309245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
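connect() failed, errno = 111 is ECONNREFUSED: nothing is accepting connections on 10.0.0.2 port 4420 any more, so each reconnect attempt dies in posix_sock_create, controller reinitialization fails, and the controller sits in the failed state until the next retry (the next attempt in this log comes two seconds later). This is the condition host/timeout.sh provokes by removing the subsystem's TCP listener. If the target-side RPC socket is reachable — the SPDK default /var/tmp/spdk.sock is an assumption here, since the log never shows it — the missing listener can be confirmed directly, as a sketch:

  # Hypothetical target-side check; assumes the nvmf target answers on the
  # default RPC socket rather than the bdevperf socket used above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .listen_addresses'
  # An empty listen_addresses list here lines up with the errno 111 failures above.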
00:23:40.395 [2024-04-25 18:18:38.309368] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:40.395 18:18:38 -- host/timeout.sh@56 -- # sleep 2 00:23:42.926 [2024-04-25 18:18:40.309478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.926 [2024-04-25 18:18:40.309596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.926 [2024-04-25 18:18:40.309615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x942170 with addr=10.0.0.2, port=4420 00:23:42.926 [2024-04-25 18:18:40.309628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942170 is same with the state(5) to be set 00:23:42.926 [2024-04-25 18:18:40.309682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x942170 (9): Bad file descriptor 00:23:42.926 [2024-04-25 18:18:40.309700] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:42.926 [2024-04-25 18:18:40.309710] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:42.926 [2024-04-25 18:18:40.309719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:42.926 [2024-04-25 18:18:40.309759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.926 [2024-04-25 18:18:40.310097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:42.926 18:18:40 -- host/timeout.sh@57 -- # get_controller 00:23:42.926 18:18:40 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.926 18:18:40 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:42.926 18:18:40 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:42.926 18:18:40 -- host/timeout.sh@58 -- # get_bdev 00:23:42.926 18:18:40 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:42.926 18:18:40 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:42.926 18:18:40 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:42.926 18:18:40 -- host/timeout.sh@61 -- # sleep 5 00:23:44.869 [2024-04-25 18:18:42.310223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.869 [2024-04-25 18:18:42.310322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.869 [2024-04-25 18:18:42.310357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x942170 with addr=10.0.0.2, port=4420 00:23:44.869 [2024-04-25 18:18:42.310370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942170 is same with the state(5) to be set 00:23:44.869 [2024-04-25 18:18:42.310393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x942170 (9): Bad file descriptor 00:23:44.869 [2024-04-25 18:18:42.310412] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.869 [2024-04-25 18:18:42.310421] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.869 [2024-04-25 18:18:42.310431] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:23:44.869 [2024-04-25 18:18:42.310809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.869 [2024-04-25 18:18:42.310834] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.769 [2024-04-25 18:18:44.310868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.704 00:23:47.704 Latency(us) 00:23:47.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.704 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:47.704 Verification LBA range: start 0x0 length 0x4000 00:23:47.704 NVMe0n1 : 8.16 2059.52 8.05 15.69 0.00 61741.27 2517.18 7046430.72 00:23:47.704 =================================================================================================================== 00:23:47.704 Total : 2059.52 8.05 15.69 0.00 61741.27 2517.18 7046430.72 00:23:47.704 0 00:23:47.962 18:18:45 -- host/timeout.sh@62 -- # get_controller 00:23:47.962 18:18:45 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:47.962 18:18:45 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:48.221 18:18:46 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:48.221 18:18:46 -- host/timeout.sh@63 -- # get_bdev 00:23:48.221 18:18:46 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:48.221 18:18:46 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:48.480 18:18:46 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:48.480 18:18:46 -- host/timeout.sh@65 -- # wait 87689 00:23:48.480 18:18:46 -- host/timeout.sh@67 -- # killprocess 87641 00:23:48.480 18:18:46 -- common/autotest_common.sh@926 -- # '[' -z 87641 ']' 00:23:48.480 18:18:46 -- common/autotest_common.sh@930 -- # kill -0 87641 00:23:48.480 18:18:46 -- common/autotest_common.sh@931 -- # uname 00:23:48.481 18:18:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:48.481 18:18:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87641 00:23:48.481 18:18:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:48.481 18:18:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:48.481 killing process with pid 87641 00:23:48.481 18:18:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87641' 00:23:48.481 18:18:46 -- common/autotest_common.sh@945 -- # kill 87641 00:23:48.481 Received shutdown signal, test time was about 9.126860 seconds 00:23:48.481 00:23:48.481 Latency(us) 00:23:48.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.481 =================================================================================================================== 00:23:48.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.481 18:18:46 -- common/autotest_common.sh@950 -- # wait 87641 00:23:48.739 18:18:46 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.998 [2024-04-25 18:18:46.737712] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.998 18:18:46 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:48.998 18:18:46 -- host/timeout.sh@74 -- # bdevperf_pid=87841 00:23:48.998 
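With the first case wound up — the controller never reconnects, so get_controller and get_bdev both evaluate to empty strings, and bdevperf pid 87641 is killed after roughly 9.1 seconds — the script rebuilds the fixture for the next case: it re-adds the TCP listener for nqn.2016-06.io.spdk:cnode1 and starts a fresh bdevperf (pid 87841) in wait-for-RPC mode. Restated as a standalone sketch using the same paths and flags recorded above:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # Re-expose the subsystem on 10.0.0.2:4420 (target-side RPC on its default
  # socket, as in the logged call)
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  # Launch bdevperf idle: -z makes it wait for RPC configuration on its own socket,
  # and -q 128 / -o 4096 / -w verify / -t 10 match the workload summarised above
  $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!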
18:18:46 -- host/timeout.sh@76 -- # waitforlisten 87841 /var/tmp/bdevperf.sock 00:23:48.998 18:18:46 -- common/autotest_common.sh@819 -- # '[' -z 87841 ']' 00:23:48.998 18:18:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.998 18:18:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:48.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.998 18:18:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.998 18:18:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:48.998 18:18:46 -- common/autotest_common.sh@10 -- # set +x 00:23:48.998 [2024-04-25 18:18:46.796634] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:48.998 [2024-04-25 18:18:46.796700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87841 ] 00:23:48.998 [2024-04-25 18:18:46.924887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.257 [2024-04-25 18:18:47.020840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.823 18:18:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:49.823 18:18:47 -- common/autotest_common.sh@852 -- # return 0 00:23:49.823 18:18:47 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:50.082 18:18:47 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:50.340 NVMe0n1 00:23:50.340 18:18:48 -- host/timeout.sh@84 -- # rpc_pid=87893 00:23:50.340 18:18:48 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.340 18:18:48 -- host/timeout.sh@86 -- # sleep 1 00:23:50.598 Running I/O for 10 seconds... 
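The attach above is where the behaviour under test is configured: NVMe0 is connected to nqn.2016-06.io.spdk:cnode1 over TCP with, reading the long flag names, a 1-second delay between reconnect attempts, a 2-second window before queued I/O is failed back quickly, and a 5-second controller-loss timeout after which the controller is given up on; bdevperf.py perform_tests then kicks off the queued verify workload. As a standalone sketch against the bdevperf RPC socket (flag semantics paraphrased from their names, not from the bdev_nvme source):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Attach the controller with the three timeout knobs exercised by this test
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 2 --ctrlr-loss-timeout-sec 5
  # Start the verify job that bdevperf queued while idle (-z); NVMe0n1 is the
  # resulting bdev, as shown in the trace above
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests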
00:23:51.534 18:18:49 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.795 [2024-04-25 18:18:49.460242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1661600 is same with the state(5) to be set 00:23:51.796 [2024-04-25 18:18:49.460975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:51.796 [2024-04-25 18:18:49.461033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.796 [2024-04-25 18:18:49.461417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.796 [2024-04-25 18:18:49.461429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.797 [2024-04-25 18:18:49.461843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461872] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.797 [2024-04-25 18:18:49.461957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.797 [2024-04-25 18:18:49.461976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.461986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.461996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462066] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.797 [2024-04-25 18:18:49.462199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.797 [2024-04-25 18:18:49.462209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.797 [2024-04-25 18:18:49.462218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4048 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 
18:18:49.462498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.462943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.462986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.462996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.798 [2024-04-25 18:18:49.463005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.463015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.798 [2024-04-25 18:18:49.463024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.798 [2024-04-25 18:18:49.463034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:51.799 [2024-04-25 18:18:49.463114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.799 [2024-04-25 18:18:49.463464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463533] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.799 [2024-04-25 18:18:49.463600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2280420 is same with the state(5) to be set 00:23:51.799 [2024-04-25 18:18:49.463621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.799 [2024-04-25 18:18:49.463628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.799 [2024-04-25 18:18:49.463641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3928 len:8 PRP1 0x0 PRP2 0x0 00:23:51.799 [2024-04-25 18:18:49.463651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.799 [2024-04-25 18:18:49.463701] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2280420 was disconnected and freed. reset controller. 
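Everything from the nvmf_subsystem_remove_listener call above down to the "qpair 0x2280420 was disconnected and freed" line is the expected fallout of pulling the TCP listener out from under an active verify workload: each outstanding READ/WRITE on qid:1 is completed manually with ABORTED - SQ DELETION status, the qpair is freed, and bdev_nvme starts resetting (reconnecting) the controller. A hedged sketch of the listener toggle that drives this phase, using the target-side RPCs exactly as they appear in the trace:

  # hedged sketch: drop the listener to force aborts and reconnect attempts, then restore it
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 2   # initiator-side connect() now fails; retries run every --reconnect-delay-sec
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420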
00:23:51.799 [2024-04-25 18:18:49.463985] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:51.799 [2024-04-25 18:18:49.464073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:23:51.799 [2024-04-25 18:18:49.464188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.799 [2024-04-25 18:18:49.464239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.799 [2024-04-25 18:18:49.464256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239170 with addr=10.0.0.2, port=4420 00:23:51.799 [2024-04-25 18:18:49.464267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239170 is same with the state(5) to be set 00:23:51.799 [2024-04-25 18:18:49.464285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:23:51.799 [2024-04-25 18:18:49.464303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:51.799 [2024-04-25 18:18:49.464313] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:51.799 [2024-04-25 18:18:49.464324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:51.799 [2024-04-25 18:18:49.464344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:51.799 [2024-04-25 18:18:49.464355] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:51.799 18:18:49 -- host/timeout.sh@90 -- # sleep 1 00:23:52.736 [2024-04-25 18:18:50.464477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.736 [2024-04-25 18:18:50.464563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.736 [2024-04-25 18:18:50.464580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239170 with addr=10.0.0.2, port=4420 00:23:52.736 [2024-04-25 18:18:50.464594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239170 is same with the state(5) to be set 00:23:52.736 [2024-04-25 18:18:50.464616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:23:52.736 [2024-04-25 18:18:50.464634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:52.736 [2024-04-25 18:18:50.464643] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:52.736 [2024-04-25 18:18:50.464652] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:52.736 [2024-04-25 18:18:50.464677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
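The repeated "connect() failed, errno = 111" entries are ECONNREFUSED: with the listener removed, every reconnect attempt (spaced by --reconnect-delay-sec) is refused, and the controller cycles through disconnect, failed reinitialization, and another reset until the listener is re-added below, at which point the reset finally succeeds. A quick way to confirm the errno mapping (hedged, any Linux host with Python 3):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused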
00:23:52.736 [2024-04-25 18:18:50.464688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:52.736 18:18:50 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.994 [2024-04-25 18:18:50.681097] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.994 18:18:50 -- host/timeout.sh@92 -- # wait 87893 00:23:53.559 [2024-04-25 18:18:51.480976] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:01.684 00:24:01.684 Latency(us) 00:24:01.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.684 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.684 Verification LBA range: start 0x0 length 0x4000 00:24:01.684 NVMe0n1 : 10.00 10678.10 41.71 0.00 0.00 11968.10 919.74 3019898.88 00:24:01.684 =================================================================================================================== 00:24:01.684 Total : 10678.10 41.71 0.00 0.00 11968.10 919.74 3019898.88 00:24:01.684 0 00:24:01.684 18:18:58 -- host/timeout.sh@97 -- # rpc_pid=88011 00:24:01.684 18:18:58 -- host/timeout.sh@98 -- # sleep 1 00:24:01.684 18:18:58 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.684 Running I/O for 10 seconds... 00:24:01.684 18:18:59 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.684 [2024-04-25 18:18:59.601071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 
18:18:59.601261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same 
with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.684 [2024-04-25 18:18:59.601501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.601647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be0b0 is same with the state(5) to be set 00:24:01.685 [2024-04-25 18:18:59.602118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:113 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.602966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.602975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3888 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.603812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.603823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.604048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.604069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.604082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.604092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.604104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.604114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.604125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.604134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.604146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 
18:18:59.604156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.604167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.604176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.685 [2024-04-25 18:18:59.604187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.685 [2024-04-25 18:18:59.604197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.604988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.604997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.605286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.605310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.605443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.605473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.605728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.605749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.605884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.605903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.605914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.606182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.606201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.606440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.606458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.606471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.606481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.606493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.606504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.606515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.606774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.606798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.606809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.607067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.607413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:01.686 [2024-04-25 18:18:59.607644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.607964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.607984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.607994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.608006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.608015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.608263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.608325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.608339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.686 [2024-04-25 18:18:59.608349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.608360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.608370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.608584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.608604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.608617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.686 [2024-04-25 18:18:59.608627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.686 [2024-04-25 18:18:59.608639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.608649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.608662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.608808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.608940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.608951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.609252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.609287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.609310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.609560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.609587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.609609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.609868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.609891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.609912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.609923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.610153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.610178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.610199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.610474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.610501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.610524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.610779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:01.687 [2024-04-25 18:18:59.610802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.610822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.610844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.610963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.610975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.610985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.611535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611576] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.611757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.611979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.611995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.687 [2024-04-25 18:18:59.612004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.687 [2024-04-25 18:18:59.612015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.687 [2024-04-25 18:18:59.612024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.612035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.612445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.612587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.612607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.612861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.612884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.612898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.612909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.612921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.612930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.612941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.612950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.613090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.613240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.613512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.613525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.613662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.613782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.613804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.688 [2024-04-25 18:18:59.613815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.688 [2024-04-25 18:18:59.614727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.614981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b3130 is same with the state(5) to be set 00:24:01.688 [2024-04-25 18:18:59.615008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.688 [2024-04-25 18:18:59.615019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.688 [2024-04-25 18:18:59.615028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0 00:24:01.688 [2024-04-25 18:18:59.615038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.688 [2024-04-25 18:18:59.615314] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22b3130 was disconnected and freed. reset controller. 
00:24:01.688 [2024-04-25 18:18:59.615403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.688 [2024-04-25 18:18:59.615518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.947 [2024-04-25 18:18:59.615532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.947 [2024-04-25 18:18:59.615541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.947 [2024-04-25 18:18:59.615636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.947 [2024-04-25 18:18:59.615648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.947 [2024-04-25 18:18:59.615659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.947 [2024-04-25 18:18:59.615668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.947 [2024-04-25 18:18:59.615677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239170 is same with the state(5) to be set 00:24:01.947 [2024-04-25 18:18:59.616232] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.947 [2024-04-25 18:18:59.616265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:24:01.947 [2024-04-25 18:18:59.616385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.947 [2024-04-25 18:18:59.616642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.947 [2024-04-25 18:18:59.616674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239170 with addr=10.0.0.2, port=4420 00:24:01.947 [2024-04-25 18:18:59.616686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239170 is same with the state(5) to be set 00:24:01.947 [2024-04-25 18:18:59.616707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:24:01.947 [2024-04-25 18:18:59.616724] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.947 [2024-04-25 18:18:59.616862] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.947 [2024-04-25 18:18:59.616970] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.947 [2024-04-25 18:18:59.617009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.947 [2024-04-25 18:18:59.617021] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.947 18:18:59 -- host/timeout.sh@101 -- # sleep 3 00:24:02.881 [2024-04-25 18:19:00.617129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.881 [2024-04-25 18:19:00.617264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.881 [2024-04-25 18:19:00.617284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239170 with addr=10.0.0.2, port=4420 00:24:02.881 [2024-04-25 18:19:00.617312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239170 is same with the state(5) to be set 00:24:02.881 [2024-04-25 18:19:00.617339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:24:02.881 [2024-04-25 18:19:00.617364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.881 [2024-04-25 18:19:00.617374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.881 [2024-04-25 18:19:00.617386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.881 [2024-04-25 18:19:00.617412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.881 [2024-04-25 18:19:00.617424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:03.815 [2024-04-25 18:19:01.617502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.815 [2024-04-25 18:19:01.617617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.815 [2024-04-25 18:19:01.617635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239170 with addr=10.0.0.2, port=4420 00:24:03.815 [2024-04-25 18:19:01.617646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239170 is same with the state(5) to be set 00:24:03.815 [2024-04-25 18:19:01.617680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:24:03.815 [2024-04-25 18:19:01.617696] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:03.815 [2024-04-25 18:19:01.617705] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:03.815 [2024-04-25 18:19:01.617714] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:03.815 [2024-04-25 18:19:01.617734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:03.815 [2024-04-25 18:19:01.617745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:04.750 [2024-04-25 18:19:02.619073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.750 [2024-04-25 18:19:02.619174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.750 [2024-04-25 18:19:02.619192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239170 with addr=10.0.0.2, port=4420 00:24:04.750 [2024-04-25 18:19:02.619203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239170 is same with the state(5) to be set 00:24:04.750 [2024-04-25 18:19:02.619360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2239170 (9): Bad file descriptor 00:24:04.750 [2024-04-25 18:19:02.619800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:04.750 [2024-04-25 18:19:02.619857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:04.750 [2024-04-25 18:19:02.619883] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:04.750 [2024-04-25 18:19:02.622326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:04.750 [2024-04-25 18:19:02.622383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:04.750 18:19:02 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.009 [2024-04-25 18:19:02.846355] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.009 18:19:02 -- host/timeout.sh@103 -- # wait 88011 00:24:05.944 [2024-04-25 18:19:03.652873] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
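The reconnect loop above ends exactly when the target listener comes back: every connect() attempt fails with errno 111 (connection refused) while the listener is down, and the queued controller reset only completes after host/timeout.sh@102 re-adds the listener over RPC ("Resetting controller successful"). A minimal shell sketch of that remove/re-add cycle, assuming a running SPDK target that already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and the rpc.py path used by this CI VM, would be:

# Hedged sketch, not part of the captured run; adjust paths/addresses for your setup.
NQN=nqn.2016-06.io.spdk:cnode1
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop the listener: host-side connect() starts failing with errno 111 and bdev_nvme keeps retrying the reset.
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 3   # let a few reconnect attempts fail, as host/timeout.sh@101 does
# Restore the listener: the next reconnect attempt succeeds and the pending reset completes.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420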
00:24:11.231 
00:24:11.231                                                                  Latency(us)
00:24:11.231 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:24:11.231 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:11.231 Verification LBA range: start 0x0 length 0x4000
00:24:11.231 NVMe0n1                     :      10.01    9058.54      35.38    6652.51       0.00    8133.68     726.11 3035150.89
00:24:11.232 ===================================================================================================================
00:24:11.232 Total                       :               9058.54      35.38    6652.51       0.00    8133.68       0.00 3035150.89
00:24:11.232 0
00:24:11.232 18:19:08 -- host/timeout.sh@105 -- # killprocess 87841
00:24:11.232 18:19:08 -- common/autotest_common.sh@926 -- # '[' -z 87841 ']'
00:24:11.232 18:19:08 -- common/autotest_common.sh@930 -- # kill -0 87841
00:24:11.232 18:19:08 -- common/autotest_common.sh@931 -- # uname
00:24:11.232 18:19:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:11.232 18:19:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87841
00:24:11.232 killing process with pid 87841
Received shutdown signal, test time was about 10.000000 seconds
00:24:11.232 
00:24:11.232                                                                  Latency(us)
00:24:11.232 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:24:11.232 ===================================================================================================================
00:24:11.232 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:24:11.232 18:19:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:11.232 18:19:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:11.232 18:19:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87841'
00:24:11.232 18:19:08 -- common/autotest_common.sh@945 -- # kill 87841
00:24:11.232 18:19:08 -- common/autotest_common.sh@950 -- # wait 87841
00:24:11.232 18:19:08 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:11.232 18:19:08 -- host/timeout.sh@110 -- # bdevperf_pid=88132
00:24:11.232 18:19:08 -- host/timeout.sh@112 -- # waitforlisten 88132 /var/tmp/bdevperf.sock
00:24:11.232 18:19:08 -- common/autotest_common.sh@819 -- # '[' -z 88132 ']'
00:24:11.232 18:19:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:11.232 18:19:08 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:11.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:11.232 18:19:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:11.232 18:19:08 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:11.232 18:19:08 -- common/autotest_common.sh@10 -- # set +x
00:24:11.232 [2024-04-25 18:19:08.806856] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
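The fresh bdevperf instance started above (pid 88132) is launched with -z, so it sits idle on /var/tmp/bdevperf.sock and only starts its workload once a controller is attached and perform_tests is issued over RPC, which is what the next trace lines do. A rough sketch of that driving sequence, reusing the socket path and parameters that appear in this run (copied from the trace, not independently verified), looks like:

# Hedged reproduction sketch; all paths and values are taken from the log above.
SOCK=/var/tmp/bdevperf.sock
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the 10 s randread job; the test then removes the listener mid-run to exercise the reconnect/loss timeouts.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests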
00:24:11.232 [2024-04-25 18:19:08.806966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88132 ] 00:24:11.232 [2024-04-25 18:19:08.944157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.232 [2024-04-25 18:19:09.030495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.165 18:19:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:12.165 18:19:09 -- common/autotest_common.sh@852 -- # return 0 00:24:12.165 18:19:09 -- host/timeout.sh@116 -- # dtrace_pid=88160 00:24:12.165 18:19:09 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88132 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:12.165 18:19:09 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:12.165 18:19:10 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:12.422 NVMe0n1 00:24:12.422 18:19:10 -- host/timeout.sh@124 -- # rpc_pid=88212 00:24:12.422 18:19:10 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.422 18:19:10 -- host/timeout.sh@125 -- # sleep 1 00:24:12.679 Running I/O for 10 seconds... 00:24:13.614 18:19:11 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.874 [2024-04-25 18:19:11.569164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.874 [2024-04-25 18:19:11.569534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569542] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.569616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c17e0 is same with the state(5) to be set 00:24:13.875 [2024-04-25 18:19:11.570423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.570462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.570486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.570497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.570509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.570520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.570531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.570540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.570551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.570560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.570571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 
[2024-04-25 18:19:11.570580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.570591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.570600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.570900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.571986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.571995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.572627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.572637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.573969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.573979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.574101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:13.875 [2024-04-25 18:19:11.574124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.574268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.574541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.574678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.574707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.574840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.574982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.574994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.575114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.575138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.575539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 
18:19:11.575574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.575715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.575741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.575881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.575980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.575990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.576909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.576929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.577060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.577075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.577185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.577204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.577240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.577372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.577516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.577642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.577663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.577674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.577818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.577952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.577975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.578227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.578253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.578519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.578551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.578685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.578711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.578971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.875 [2024-04-25 18:19:11.578988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.875 [2024-04-25 18:19:11.579111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82040 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.579887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.579896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 
18:19:11.580700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.580833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.580846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.581906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.581922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.582032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.582048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.582058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.582194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.876 [2024-04-25 18:19:11.582211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.582334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.876 [2024-04-25 18:19:11.582354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.876 [2024-04-25 18:19:11.582484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112800 len:8 PRP1 0x0 PRP2 0x0 00:24:13.876 [2024-04-25 18:19:11.582504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.582649] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e70420 was disconnected and freed. reset controller. 00:24:13.876 [2024-04-25 18:19:11.582963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.876 [2024-04-25 18:19:11.582991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.583003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.876 [2024-04-25 18:19:11.583014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.583024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.876 [2024-04-25 18:19:11.583033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.583044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.876 [2024-04-25 18:19:11.583053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.876 [2024-04-25 18:19:11.583155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e29170 is same with the state(5) to be set 00:24:13.876 [2024-04-25 18:19:11.583607] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:13.876 [2024-04-25 18:19:11.583643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29170 (9): Bad file descriptor 00:24:13.876 [2024-04-25 18:19:11.583939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.876 [2024-04-25 18:19:11.584008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.876 [2024-04-25 18:19:11.584026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e29170 with addr=10.0.0.2, port=4420 00:24:13.876 [2024-04-25 18:19:11.584238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e29170 is same with the state(5) to be set 00:24:13.876 [2024-04-25 18:19:11.584263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29170 (9): Bad file descriptor 00:24:13.876 [2024-04-25 18:19:11.584298] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:13.876 [2024-04-25 18:19:11.584413] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:13.876 [2024-04-25 18:19:11.584426] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:13.876 [2024-04-25 18:19:11.584573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:13.876 [2024-04-25 18:19:11.584649] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:13.876 18:19:11 -- host/timeout.sh@128 -- # wait 88212 00:24:15.775 [2024-04-25 18:19:13.584775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.775 [2024-04-25 18:19:13.584880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.775 [2024-04-25 18:19:13.584898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e29170 with addr=10.0.0.2, port=4420 00:24:15.775 [2024-04-25 18:19:13.584910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e29170 is same with the state(5) to be set 00:24:15.775 [2024-04-25 18:19:13.584932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29170 (9): Bad file descriptor 00:24:15.775 [2024-04-25 18:19:13.584949] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.775 [2024-04-25 18:19:13.584958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.775 [2024-04-25 18:19:13.584967] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.775 [2024-04-25 18:19:13.584989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.775 [2024-04-25 18:19:13.584999] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.675 [2024-04-25 18:19:15.585096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.675 [2024-04-25 18:19:15.585182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.675 [2024-04-25 18:19:15.585200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e29170 with addr=10.0.0.2, port=4420 00:24:17.675 [2024-04-25 18:19:15.585252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e29170 is same with the state(5) to be set 00:24:17.675 [2024-04-25 18:19:15.585275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29170 (9): Bad file descriptor 00:24:17.675 [2024-04-25 18:19:15.585305] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.675 [2024-04-25 18:19:15.585317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.675 [2024-04-25 18:19:15.585328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.675 [2024-04-25 18:19:15.585352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.675 [2024-04-25 18:19:15.585363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.229 [2024-04-25 18:19:17.585422] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:20.797 00:24:20.797 Latency(us) 00:24:20.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.797 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:20.797 NVMe0n1 : 8.15 3075.20 12.01 15.70 0.00 41470.20 2398.02 7046430.72 00:24:20.797 =================================================================================================================== 00:24:20.797 Total : 3075.20 12.01 15.70 0.00 41470.20 2398.02 7046430.72 00:24:20.797 0 00:24:20.797 18:19:18 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:20.797 Attaching 5 probes... 00:24:20.797 1318.612370: reset bdev controller NVMe0 00:24:20.797 1318.710446: reconnect bdev controller NVMe0 00:24:20.797 3319.688608: reconnect delay bdev controller NVMe0 00:24:20.797 3319.703907: reconnect bdev controller NVMe0 00:24:20.797 5320.033428: reconnect delay bdev controller NVMe0 00:24:20.797 5320.046022: reconnect bdev controller NVMe0 00:24:20.797 7320.402833: reconnect delay bdev controller NVMe0 00:24:20.797 7320.420258: reconnect bdev controller NVMe0 00:24:20.797 18:19:18 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:20.797 18:19:18 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:20.797 18:19:18 -- host/timeout.sh@136 -- # kill 88160 00:24:20.797 18:19:18 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:20.797 18:19:18 -- host/timeout.sh@139 -- # killprocess 88132 00:24:20.797 18:19:18 -- common/autotest_common.sh@926 -- # '[' -z 88132 ']' 00:24:20.797 18:19:18 -- common/autotest_common.sh@930 -- # kill -0 88132 00:24:20.797 18:19:18 -- common/autotest_common.sh@931 -- # uname 00:24:20.797 18:19:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:20.797 18:19:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88132 00:24:20.797 killing process with pid 88132 00:24:20.797 Received shutdown signal, test time was about 8.214452 seconds 00:24:20.797 00:24:20.797 Latency(us) 00:24:20.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.797 =================================================================================================================== 00:24:20.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.797 18:19:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:20.797 18:19:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:20.797 18:19:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88132' 00:24:20.797 18:19:18 -- common/autotest_common.sh@945 -- # kill 88132 00:24:20.797 18:19:18 -- common/autotest_common.sh@950 -- # wait 88132 00:24:21.056 18:19:18 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.314 18:19:19 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:21.314 18:19:19 -- host/timeout.sh@145 -- # nvmftestfini 00:24:21.314 18:19:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:21.314 18:19:19 -- nvmf/common.sh@116 -- # sync 00:24:21.314 18:19:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:21.314 18:19:19 -- nvmf/common.sh@119 -- # set +e 
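The trace summary above shows the cadence behind the timeout test's check: a reset probe at 1318.6 followed by 'reconnect delay' probes at 3319.7, 5320.0 and 7320.4, roughly 2000 apart, which matches the connect() failures logged at two-second intervals (18:19:11, 18:19:13, 18:19:15, 18:19:17). Over the ~8.2 s measured window that yields three delay events, so grep -c returns 3 and the (( 3 <= 2 )) guard stays false. A minimal sketch of that assertion, reconstructed from the commands visible in the log (the exact construct in host/timeout.sh may differ; the variable name is illustrative):

  delays=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
  # an ~8 s run with a 2 s reconnect delay should record at least three delay events
  (( delays <= 2 )) && exit 1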
00:24:21.314 18:19:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:21.314 18:19:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:21.314 rmmod nvme_tcp 00:24:21.314 rmmod nvme_fabrics 00:24:21.314 rmmod nvme_keyring 00:24:21.314 18:19:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:21.314 18:19:19 -- nvmf/common.sh@123 -- # set -e 00:24:21.314 18:19:19 -- nvmf/common.sh@124 -- # return 0 00:24:21.314 18:19:19 -- nvmf/common.sh@477 -- # '[' -n 87550 ']' 00:24:21.314 18:19:19 -- nvmf/common.sh@478 -- # killprocess 87550 00:24:21.314 18:19:19 -- common/autotest_common.sh@926 -- # '[' -z 87550 ']' 00:24:21.314 18:19:19 -- common/autotest_common.sh@930 -- # kill -0 87550 00:24:21.314 18:19:19 -- common/autotest_common.sh@931 -- # uname 00:24:21.314 18:19:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:21.314 18:19:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87550 00:24:21.574 killing process with pid 87550 00:24:21.574 18:19:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:21.574 18:19:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:21.574 18:19:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87550' 00:24:21.574 18:19:19 -- common/autotest_common.sh@945 -- # kill 87550 00:24:21.574 18:19:19 -- common/autotest_common.sh@950 -- # wait 87550 00:24:21.574 18:19:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:21.574 18:19:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:21.574 18:19:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:21.574 18:19:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.574 18:19:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:21.574 18:19:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.574 18:19:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.574 18:19:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.833 18:19:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:21.833 00:24:21.833 real 0m46.680s 00:24:21.833 user 2m16.731s 00:24:21.833 sys 0m5.101s 00:24:21.833 18:19:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.833 18:19:19 -- common/autotest_common.sh@10 -- # set +x 00:24:21.833 ************************************ 00:24:21.833 END TEST nvmf_timeout 00:24:21.833 ************************************ 00:24:21.833 18:19:19 -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]] 00:24:21.833 18:19:19 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:21.833 18:19:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:21.833 18:19:19 -- common/autotest_common.sh@10 -- # set +x 00:24:21.833 18:19:19 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:21.833 00:24:21.833 real 18m3.018s 00:24:21.833 user 57m14.958s 00:24:21.833 sys 3m39.157s 00:24:21.833 18:19:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.833 ************************************ 00:24:21.833 END TEST nvmf_tcp 00:24:21.833 18:19:19 -- common/autotest_common.sh@10 -- # set +x 00:24:21.833 ************************************ 00:24:21.833 18:19:19 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:24:21.833 18:19:19 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:21.833 18:19:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:21.833 18:19:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:24:21.833 18:19:19 -- common/autotest_common.sh@10 -- # set +x 00:24:21.833 ************************************ 00:24:21.833 START TEST spdkcli_nvmf_tcp 00:24:21.833 ************************************ 00:24:21.833 18:19:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:21.833 * Looking for test storage... 00:24:21.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:21.833 18:19:19 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:21.833 18:19:19 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:21.833 18:19:19 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:21.833 18:19:19 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.833 18:19:19 -- nvmf/common.sh@7 -- # uname -s 00:24:21.833 18:19:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.833 18:19:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.833 18:19:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.833 18:19:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.833 18:19:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.833 18:19:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.833 18:19:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.833 18:19:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.833 18:19:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.833 18:19:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.092 18:19:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:24:22.092 18:19:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:24:22.092 18:19:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.092 18:19:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.092 18:19:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:22.092 18:19:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:22.092 18:19:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.092 18:19:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.092 18:19:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.092 18:19:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.092 18:19:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.092 18:19:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.092 18:19:19 -- paths/export.sh@5 -- # export PATH 00:24:22.092 18:19:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.092 18:19:19 -- nvmf/common.sh@46 -- # : 0 00:24:22.092 18:19:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:22.092 18:19:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:22.092 18:19:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:22.092 18:19:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.092 18:19:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.092 18:19:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:22.092 18:19:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:22.092 18:19:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:22.092 18:19:19 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:22.092 18:19:19 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:22.092 18:19:19 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:22.092 18:19:19 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:22.092 18:19:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:22.093 18:19:19 -- common/autotest_common.sh@10 -- # set +x 00:24:22.093 18:19:19 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:22.093 18:19:19 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=88429 00:24:22.093 18:19:19 -- spdkcli/common.sh@34 -- # waitforlisten 88429 00:24:22.093 18:19:19 -- common/autotest_common.sh@819 -- # '[' -z 88429 ']' 00:24:22.093 18:19:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.093 18:19:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:22.093 18:19:19 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:22.093 18:19:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.093 18:19:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:22.093 18:19:19 -- common/autotest_common.sh@10 -- # set +x 00:24:22.093 [2024-04-25 18:19:19.836916] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
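Just above, run_nvmf_tgt launches build/bin/nvmf_tgt -m 0x3 -p 0 and waitforlisten 88429 blocks until the daemon answers on /var/tmp/spdk.sock before spdkcli_job.py is driven against it. A minimal, illustrative version of that launch-and-poll pattern (this is not the actual waitforlisten helper; using rpc_get_methods as the readiness probe is an assumption here):

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  # poll the UNIX-domain RPC socket until the target accepts commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done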
00:24:22.093 [2024-04-25 18:19:19.837009] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88429 ] 00:24:22.093 [2024-04-25 18:19:19.977238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:22.351 [2024-04-25 18:19:20.083844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:22.351 [2024-04-25 18:19:20.084462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.351 [2024-04-25 18:19:20.084471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.917 18:19:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:22.917 18:19:20 -- common/autotest_common.sh@852 -- # return 0 00:24:22.917 18:19:20 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:22.917 18:19:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:22.917 18:19:20 -- common/autotest_common.sh@10 -- # set +x 00:24:22.917 18:19:20 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:22.917 18:19:20 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:22.917 18:19:20 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:22.917 18:19:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:22.917 18:19:20 -- common/autotest_common.sh@10 -- # set +x 00:24:23.175 18:19:20 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:23.175 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:23.175 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:23.175 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:23.175 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:23.175 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:23.175 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:23.175 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:23.175 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:23.175 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:23.175 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:23.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:23.175 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:23.175 ' 00:24:23.432 [2024-04-25 18:19:21.298178] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:25.959 [2024-04-25 18:19:23.525840] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.891 [2024-04-25 18:19:24.794914] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:29.414 [2024-04-25 18:19:27.136487] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:31.358 [2024-04-25 18:19:29.157932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:33.258 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:33.258 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:33.258 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:33.258 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:33.258 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:33.258 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:33.258 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:33.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:33.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:33.258 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:33.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:33.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:33.258 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:33.258 18:19:30 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:33.258 18:19:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:33.258 18:19:30 -- common/autotest_common.sh@10 -- # set +x 00:24:33.258 18:19:30 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:33.258 18:19:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:33.258 18:19:30 -- common/autotest_common.sh@10 -- # set +x 00:24:33.258 18:19:30 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:33.258 18:19:30 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:24:33.516 18:19:31 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:33.516 18:19:31 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:33.516 18:19:31 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:33.516 18:19:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:33.516 18:19:31 -- common/autotest_common.sh@10 -- # set +x 00:24:33.516 18:19:31 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:33.516 18:19:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:33.516 18:19:31 -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.516 18:19:31 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:33.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:33.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:33.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:33.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:33.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:33.516 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:33.516 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:33.516 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:33.516 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:33.516 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:33.516 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:33.516 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:33.516 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:33.516 ' 00:24:38.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:38.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:38.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:38.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:38.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:38.777 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:38.777 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:38.777 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:38.777 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:38.777 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:38.777 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:38.777 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:38.777 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:38.777 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:38.777 18:19:36 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:38.777 18:19:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:38.777 18:19:36 -- common/autotest_common.sh@10 -- # set +x 00:24:38.777 18:19:36 -- spdkcli/nvmf.sh@90 -- # killprocess 88429 00:24:38.777 18:19:36 -- common/autotest_common.sh@926 -- # '[' -z 88429 ']' 00:24:38.777 18:19:36 -- common/autotest_common.sh@930 -- # kill -0 88429 00:24:38.777 18:19:36 -- common/autotest_common.sh@931 -- # uname 00:24:38.777 18:19:36 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:38.777 18:19:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88429 00:24:38.777 killing process with pid 88429 00:24:38.777 18:19:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:38.777 18:19:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:38.777 18:19:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88429' 00:24:38.777 18:19:36 -- common/autotest_common.sh@945 -- # kill 88429 00:24:38.777 [2024-04-25 18:19:36.704495] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:38.777 18:19:36 -- common/autotest_common.sh@950 -- # wait 88429 00:24:39.035 18:19:36 -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:39.035 18:19:36 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:39.035 18:19:36 -- spdkcli/common.sh@13 -- # '[' -n 88429 ']' 00:24:39.035 18:19:36 -- spdkcli/common.sh@14 -- # killprocess 88429 00:24:39.035 18:19:36 -- common/autotest_common.sh@926 -- # '[' -z 88429 ']' 00:24:39.035 Process with pid 88429 is not found 00:24:39.035 18:19:36 -- common/autotest_common.sh@930 -- # kill -0 88429 00:24:39.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (88429) - No such process 00:24:39.035 18:19:36 -- common/autotest_common.sh@953 -- # echo 'Process with pid 88429 is not found' 00:24:39.035 18:19:36 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:39.035 18:19:36 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:39.035 18:19:36 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:39.035 00:24:39.035 real 0m17.259s 00:24:39.035 user 0m36.967s 00:24:39.035 sys 0m0.934s 00:24:39.035 18:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.035 ************************************ 00:24:39.035 END TEST spdkcli_nvmf_tcp 00:24:39.035 ************************************ 00:24:39.035 18:19:36 -- common/autotest_common.sh@10 -- # set +x 00:24:39.296 18:19:36 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:39.296 18:19:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:39.296 18:19:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:39.296 18:19:36 -- common/autotest_common.sh@10 -- # set +x 00:24:39.296 ************************************ 00:24:39.296 START TEST nvmf_identify_passthru 00:24:39.296 ************************************ 00:24:39.296 18:19:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:39.296 * Looking for test storage... 
00:24:39.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:39.296 18:19:37 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:39.296 18:19:37 -- nvmf/common.sh@7 -- # uname -s 00:24:39.296 18:19:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.296 18:19:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.296 18:19:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.296 18:19:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.296 18:19:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.296 18:19:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.296 18:19:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.296 18:19:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.296 18:19:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.296 18:19:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.296 18:19:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:24:39.296 18:19:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:24:39.296 18:19:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.296 18:19:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.296 18:19:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:39.296 18:19:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:39.296 18:19:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.296 18:19:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.296 18:19:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.296 18:19:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- paths/export.sh@5 -- # export PATH 00:24:39.296 18:19:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- nvmf/common.sh@46 -- # : 0 00:24:39.296 18:19:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:39.296 18:19:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:39.296 18:19:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:39.296 18:19:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.296 18:19:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.296 18:19:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:39.296 18:19:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:39.296 18:19:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:39.296 18:19:37 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:39.296 18:19:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.296 18:19:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.296 18:19:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.296 18:19:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- paths/export.sh@5 -- # export PATH 00:24:39.296 18:19:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.296 18:19:37 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:24:39.296 18:19:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:39.296 18:19:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.296 18:19:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:39.297 18:19:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:39.297 18:19:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:39.297 18:19:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.297 18:19:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:39.297 18:19:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.297 18:19:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:39.297 18:19:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:39.297 18:19:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:39.297 18:19:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:39.297 18:19:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:39.297 18:19:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:39.297 18:19:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.297 18:19:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.297 18:19:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:39.297 18:19:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:39.297 18:19:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:39.297 18:19:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:39.297 18:19:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:39.297 18:19:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.297 18:19:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:39.297 18:19:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:39.297 18:19:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:39.297 18:19:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:39.297 18:19:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:39.297 18:19:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:39.297 Cannot find device "nvmf_tgt_br" 00:24:39.297 18:19:37 -- nvmf/common.sh@154 -- # true 00:24:39.297 18:19:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:39.297 Cannot find device "nvmf_tgt_br2" 00:24:39.297 18:19:37 -- nvmf/common.sh@155 -- # true 00:24:39.297 18:19:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:39.297 18:19:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:39.297 Cannot find device "nvmf_tgt_br" 00:24:39.297 18:19:37 -- nvmf/common.sh@157 -- # true 00:24:39.297 18:19:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:39.297 Cannot find device "nvmf_tgt_br2" 00:24:39.297 18:19:37 -- nvmf/common.sh@158 -- # true 00:24:39.297 18:19:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:39.297 18:19:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:39.297 18:19:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:39.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.297 18:19:37 -- nvmf/common.sh@161 -- # true 00:24:39.297 18:19:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:39.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:24:39.297 18:19:37 -- nvmf/common.sh@162 -- # true 00:24:39.297 18:19:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:39.555 18:19:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:39.555 18:19:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:39.555 18:19:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:39.555 18:19:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:39.555 18:19:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:39.556 18:19:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:39.556 18:19:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:39.556 18:19:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:39.556 18:19:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:39.556 18:19:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:39.556 18:19:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:39.556 18:19:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:39.556 18:19:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:39.556 18:19:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:39.556 18:19:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:39.556 18:19:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:39.556 18:19:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:39.556 18:19:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:39.556 18:19:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:39.556 18:19:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:39.556 18:19:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:39.556 18:19:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:39.556 18:19:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:39.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:24:39.556 00:24:39.556 --- 10.0.0.2 ping statistics --- 00:24:39.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.556 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:24:39.556 18:19:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:39.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:39.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:24:39.556 00:24:39.556 --- 10.0.0.3 ping statistics --- 00:24:39.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.556 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:39.556 18:19:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:39.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:24:39.556 00:24:39.556 --- 10.0.0.1 ping statistics --- 00:24:39.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.556 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:39.556 18:19:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.556 18:19:37 -- nvmf/common.sh@421 -- # return 0 00:24:39.556 18:19:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:39.556 18:19:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.556 18:19:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:39.556 18:19:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:39.556 18:19:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.556 18:19:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:39.556 18:19:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:39.556 18:19:37 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:39.556 18:19:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:39.556 18:19:37 -- common/autotest_common.sh@10 -- # set +x 00:24:39.556 18:19:37 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:39.556 18:19:37 -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:39.556 18:19:37 -- common/autotest_common.sh@1509 -- # local bdfs 00:24:39.556 18:19:37 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:24:39.556 18:19:37 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:24:39.556 18:19:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:39.556 18:19:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:24:39.556 18:19:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:39.556 18:19:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:39.556 18:19:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:39.556 18:19:37 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:24:39.556 18:19:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:24:39.556 18:19:37 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:24:39.556 18:19:37 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:24:39.556 18:19:37 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:24:39.814 18:19:37 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:39.814 18:19:37 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:24:39.814 18:19:37 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:39.814 18:19:37 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:24:39.814 18:19:37 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:39.814 18:19:37 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:24:39.814 18:19:37 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:40.072 18:19:37 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:24:40.072 18:19:37 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:40.072 18:19:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:40.072 18:19:37 -- common/autotest_common.sh@10 -- # set +x 00:24:40.072 18:19:37 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:24:40.072 18:19:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:40.072 18:19:37 -- common/autotest_common.sh@10 -- # set +x 00:24:40.072 18:19:37 -- target/identify_passthru.sh@31 -- # nvmfpid=88917 00:24:40.072 18:19:37 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:40.072 18:19:37 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.072 18:19:37 -- target/identify_passthru.sh@35 -- # waitforlisten 88917 00:24:40.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.072 18:19:37 -- common/autotest_common.sh@819 -- # '[' -z 88917 ']' 00:24:40.072 18:19:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.072 18:19:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:40.072 18:19:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.072 18:19:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:40.072 18:19:37 -- common/autotest_common.sh@10 -- # set +x 00:24:40.072 [2024-04-25 18:19:37.923539] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:40.072 [2024-04-25 18:19:37.923812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.330 [2024-04-25 18:19:38.058392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.330 [2024-04-25 18:19:38.134814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:40.330 [2024-04-25 18:19:38.135227] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.330 [2024-04-25 18:19:38.135295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.330 [2024-04-25 18:19:38.135439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
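The identify steps interleaved above collect the local controller's identity before the passthru target comes up: get_first_nvme_bdf lists PCIe NVMe addresses via gen_nvme.sh piped through jq -r '.config[].params.traddr' (0000:00:06.0 and 0000:00:07.0 here) and takes the first, then spdk_nvme_identify output is grepped for the Serial Number (12340) and Model Number (QEMU), presumably so they can later be compared against what an NVMe-oF host sees through the passthrough-identify path. Condensed into one hedged sketch (the real helpers live in autotest_common.sh and identify_passthru.sh; head -n1 and the variable names are illustrative):

  bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  # query the local PCIe controller and extract the fields the test records
  serial=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')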
00:24:40.330 [2024-04-25 18:19:38.135625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.330 [2024-04-25 18:19:38.136126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.330 [2024-04-25 18:19:38.136349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.330 [2024-04-25 18:19:38.136352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.262 18:19:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:41.262 18:19:38 -- common/autotest_common.sh@852 -- # return 0 00:24:41.263 18:19:38 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:41.263 18:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 18:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.263 18:19:38 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:41.263 18:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 [2024-04-25 18:19:39.005461] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:41.263 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.263 18:19:39 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.263 18:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 [2024-04-25 18:19:39.019705] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.263 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.263 18:19:39 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:41.263 18:19:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:41.263 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 18:19:39 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:24:41.263 18:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 Nvme0n1 00:24:41.263 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.263 18:19:39 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:41.263 18:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.263 18:19:39 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:41.263 18:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.263 18:19:39 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.263 18:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 [2024-04-25 18:19:39.163794] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.263 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:24:41.263 18:19:39 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:41.263 18:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.263 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 [2024-04-25 18:19:39.171595] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:41.263 [ 00:24:41.263 { 00:24:41.263 "allow_any_host": true, 00:24:41.263 "hosts": [], 00:24:41.263 "listen_addresses": [], 00:24:41.263 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:41.263 "subtype": "Discovery" 00:24:41.263 }, 00:24:41.263 { 00:24:41.263 "allow_any_host": true, 00:24:41.263 "hosts": [], 00:24:41.263 "listen_addresses": [ 00:24:41.263 { 00:24:41.263 "adrfam": "IPv4", 00:24:41.263 "traddr": "10.0.0.2", 00:24:41.263 "transport": "TCP", 00:24:41.263 "trsvcid": "4420", 00:24:41.263 "trtype": "TCP" 00:24:41.263 } 00:24:41.263 ], 00:24:41.263 "max_cntlid": 65519, 00:24:41.263 "max_namespaces": 1, 00:24:41.263 "min_cntlid": 1, 00:24:41.263 "model_number": "SPDK bdev Controller", 00:24:41.263 "namespaces": [ 00:24:41.263 { 00:24:41.263 "bdev_name": "Nvme0n1", 00:24:41.263 "name": "Nvme0n1", 00:24:41.263 "nguid": "1F2159827BD2450FA38402B6A6E249DA", 00:24:41.263 "nsid": 1, 00:24:41.263 "uuid": "1f215982-7bd2-450f-a384-02b6a6e249da" 00:24:41.263 } 00:24:41.263 ], 00:24:41.263 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.263 "serial_number": "SPDK00000000000001", 00:24:41.263 "subtype": "NVMe" 00:24:41.263 } 00:24:41.263 ] 00:24:41.263 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.263 18:19:39 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:41.263 18:19:39 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:41.263 18:19:39 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:24:41.521 18:19:39 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:24:41.521 18:19:39 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:41.521 18:19:39 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:24:41.521 18:19:39 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:24:41.779 18:19:39 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:24:41.779 18:19:39 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:24:41.779 18:19:39 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:24:41.779 18:19:39 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:41.779 18:19:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.779 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:41.779 18:19:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.779 18:19:39 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:24:41.779 18:19:39 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:24:41.779 18:19:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:41.779 18:19:39 -- nvmf/common.sh@116 -- # sync 00:24:41.779 18:19:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:41.779 18:19:39 -- nvmf/common.sh@119 -- # set +e 00:24:41.779 18:19:39 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:24:41.779 18:19:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:41.779 rmmod nvme_tcp 00:24:41.779 rmmod nvme_fabrics 00:24:42.038 rmmod nvme_keyring 00:24:42.038 18:19:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:42.038 18:19:39 -- nvmf/common.sh@123 -- # set -e 00:24:42.038 18:19:39 -- nvmf/common.sh@124 -- # return 0 00:24:42.038 18:19:39 -- nvmf/common.sh@477 -- # '[' -n 88917 ']' 00:24:42.038 18:19:39 -- nvmf/common.sh@478 -- # killprocess 88917 00:24:42.038 18:19:39 -- common/autotest_common.sh@926 -- # '[' -z 88917 ']' 00:24:42.038 18:19:39 -- common/autotest_common.sh@930 -- # kill -0 88917 00:24:42.038 18:19:39 -- common/autotest_common.sh@931 -- # uname 00:24:42.038 18:19:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:42.038 18:19:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88917 00:24:42.038 killing process with pid 88917 00:24:42.038 18:19:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:42.038 18:19:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:42.038 18:19:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88917' 00:24:42.038 18:19:39 -- common/autotest_common.sh@945 -- # kill 88917 00:24:42.038 [2024-04-25 18:19:39.764179] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:42.038 18:19:39 -- common/autotest_common.sh@950 -- # wait 88917 00:24:42.297 18:19:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:42.297 18:19:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:42.297 18:19:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:42.297 18:19:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.297 18:19:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:42.297 18:19:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.297 18:19:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:42.297 18:19:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.297 18:19:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:42.297 ************************************ 00:24:42.297 END TEST nvmf_identify_passthru 00:24:42.297 ************************************ 00:24:42.297 00:24:42.297 real 0m3.039s 00:24:42.297 user 0m7.750s 00:24:42.297 sys 0m0.750s 00:24:42.297 18:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.297 18:19:40 -- common/autotest_common.sh@10 -- # set +x 00:24:42.297 18:19:40 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:42.297 18:19:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:42.297 18:19:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:42.297 18:19:40 -- common/autotest_common.sh@10 -- # set +x 00:24:42.297 ************************************ 00:24:42.297 START TEST nvmf_dif 00:24:42.297 ************************************ 00:24:42.297 18:19:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:42.297 * Looking for test storage... 
00:24:42.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:42.297 18:19:40 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:42.297 18:19:40 -- nvmf/common.sh@7 -- # uname -s 00:24:42.297 18:19:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.297 18:19:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.297 18:19:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.297 18:19:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.297 18:19:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.297 18:19:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.297 18:19:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.297 18:19:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.297 18:19:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.297 18:19:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.297 18:19:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:24:42.297 18:19:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:24:42.297 18:19:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.297 18:19:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.297 18:19:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:42.297 18:19:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:42.297 18:19:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.297 18:19:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.297 18:19:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.298 18:19:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.298 18:19:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.298 18:19:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.298 18:19:40 -- paths/export.sh@5 -- # export PATH 00:24:42.298 18:19:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.298 18:19:40 -- nvmf/common.sh@46 -- # : 0 00:24:42.298 18:19:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:42.298 18:19:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:42.298 18:19:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:42.298 18:19:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.298 18:19:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.298 18:19:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:42.298 18:19:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:42.298 18:19:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:42.298 18:19:40 -- target/dif.sh@15 -- # NULL_META=16 00:24:42.298 18:19:40 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:42.298 18:19:40 -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:42.298 18:19:40 -- target/dif.sh@15 -- # NULL_DIF=1 00:24:42.298 18:19:40 -- target/dif.sh@135 -- # nvmftestinit 00:24:42.298 18:19:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:42.298 18:19:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.298 18:19:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:42.298 18:19:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:42.298 18:19:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:42.298 18:19:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.298 18:19:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:42.298 18:19:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.298 18:19:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:42.298 18:19:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:42.298 18:19:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:42.298 18:19:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:42.298 18:19:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:42.298 18:19:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:42.298 18:19:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.298 18:19:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.298 18:19:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:42.298 18:19:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:42.298 18:19:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:42.298 18:19:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:42.298 18:19:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:42.298 18:19:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.298 18:19:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:42.298 18:19:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:42.298 18:19:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:42.298 18:19:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:42.298 18:19:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:42.298 18:19:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:42.298 Cannot find device "nvmf_tgt_br" 
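The "Cannot find device" and "Cannot open network namespace" messages here and just below are expected: nvmf_veth_init first tears down whatever topology a previous run may have left behind, then recreates it. Condensed from the commands that follow (link-up steps and the iptables ACCEPT rules omitted), the resulting layout is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

so the initiator (10.0.0.1) and the target namespace (10.0.0.2 and 10.0.0.3) end up on a single bridge, which the three ping checks below verify.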
00:24:42.298 18:19:40 -- nvmf/common.sh@154 -- # true 00:24:42.298 18:19:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:42.298 Cannot find device "nvmf_tgt_br2" 00:24:42.298 18:19:40 -- nvmf/common.sh@155 -- # true 00:24:42.298 18:19:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:42.298 18:19:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:42.557 Cannot find device "nvmf_tgt_br" 00:24:42.557 18:19:40 -- nvmf/common.sh@157 -- # true 00:24:42.557 18:19:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:42.557 Cannot find device "nvmf_tgt_br2" 00:24:42.557 18:19:40 -- nvmf/common.sh@158 -- # true 00:24:42.557 18:19:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:42.557 18:19:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:42.557 18:19:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:42.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:42.557 18:19:40 -- nvmf/common.sh@161 -- # true 00:24:42.557 18:19:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:42.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:42.557 18:19:40 -- nvmf/common.sh@162 -- # true 00:24:42.557 18:19:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:42.557 18:19:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:42.557 18:19:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:42.557 18:19:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:42.557 18:19:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:42.557 18:19:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:42.557 18:19:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:42.557 18:19:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:42.557 18:19:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:42.557 18:19:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:42.557 18:19:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:42.557 18:19:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:42.557 18:19:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:42.557 18:19:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:42.557 18:19:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:42.557 18:19:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:42.557 18:19:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:42.557 18:19:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:42.557 18:19:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:42.557 18:19:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:42.557 18:19:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:42.557 18:19:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:42.557 18:19:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:42.557 18:19:40 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:42.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:24:42.557 00:24:42.557 --- 10.0.0.2 ping statistics --- 00:24:42.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.557 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:42.557 18:19:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:42.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:42.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:24:42.816 00:24:42.816 --- 10.0.0.3 ping statistics --- 00:24:42.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.816 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:42.816 18:19:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:42.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:24:42.816 00:24:42.816 --- 10.0.0.1 ping statistics --- 00:24:42.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.816 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:24:42.816 18:19:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.816 18:19:40 -- nvmf/common.sh@421 -- # return 0 00:24:42.816 18:19:40 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:24:42.816 18:19:40 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:43.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:43.075 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:43.075 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:43.075 18:19:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.075 18:19:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:43.075 18:19:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:43.075 18:19:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.075 18:19:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:43.075 18:19:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:43.075 18:19:40 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:43.075 18:19:40 -- target/dif.sh@137 -- # nvmfappstart 00:24:43.075 18:19:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:43.075 18:19:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:43.075 18:19:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 18:19:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:43.075 18:19:40 -- nvmf/common.sh@469 -- # nvmfpid=89257 00:24:43.075 18:19:40 -- nvmf/common.sh@470 -- # waitforlisten 89257 00:24:43.075 18:19:40 -- common/autotest_common.sh@819 -- # '[' -z 89257 ']' 00:24:43.075 18:19:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.075 18:19:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:43.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.075 18:19:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
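For the dif tests the target is started without --wait-for-rpc and on a single core, and the transport is created with --dif-insert-or-strip appended to NVMF_TRANSPORT_OPTS (set at dif.sh@136 above), so that, as the option name suggests, protection information is inserted and stripped on the target side of the TCP transport. The backing device is a null bdev built from the defaults at the top of dif.sh (NULL_SIZE=64, 512-byte blocks with 16 bytes of metadata, DIF type 1). Assuming rpc_cmd again resolves to scripts/rpc.py, the setup that fio_dif_1_default performs next is approximately:

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420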
00:24:43.075 18:19:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:43.075 18:19:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 [2024-04-25 18:19:40.937064] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:43.075 [2024-04-25 18:19:40.937155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.344 [2024-04-25 18:19:41.073452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.344 [2024-04-25 18:19:41.175005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:43.344 [2024-04-25 18:19:41.175171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.344 [2024-04-25 18:19:41.175187] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.344 [2024-04-25 18:19:41.175199] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.344 [2024-04-25 18:19:41.175228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.293 18:19:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:44.293 18:19:41 -- common/autotest_common.sh@852 -- # return 0 00:24:44.293 18:19:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:44.293 18:19:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:44.294 18:19:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 18:19:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.294 18:19:41 -- target/dif.sh@139 -- # create_transport 00:24:44.294 18:19:41 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:44.294 18:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.294 18:19:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 [2024-04-25 18:19:41.975132] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.294 18:19:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.294 18:19:41 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:44.294 18:19:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:44.294 18:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:44.294 18:19:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 ************************************ 00:24:44.294 START TEST fio_dif_1_default 00:24:44.294 ************************************ 00:24:44.294 18:19:41 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:24:44.294 18:19:41 -- target/dif.sh@86 -- # create_subsystems 0 00:24:44.294 18:19:41 -- target/dif.sh@28 -- # local sub 00:24:44.294 18:19:41 -- target/dif.sh@30 -- # for sub in "$@" 00:24:44.294 18:19:41 -- target/dif.sh@31 -- # create_subsystem 0 00:24:44.294 18:19:41 -- target/dif.sh@18 -- # local sub_id=0 00:24:44.294 18:19:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:44.294 18:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.294 18:19:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 bdev_null0 00:24:44.294 18:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.294 18:19:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:44.294 18:19:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.294 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 18:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.294 18:19:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:44.294 18:19:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.294 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 18:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.294 18:19:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:44.294 18:19:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.294 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 [2024-04-25 18:19:42.023254] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.294 18:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.294 18:19:42 -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:44.294 18:19:42 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:44.294 18:19:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:44.294 18:19:42 -- nvmf/common.sh@520 -- # config=() 00:24:44.294 18:19:42 -- nvmf/common.sh@520 -- # local subsystem config 00:24:44.294 18:19:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:44.294 18:19:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.294 18:19:42 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:44.294 18:19:42 -- target/dif.sh@82 -- # gen_fio_conf 00:24:44.294 18:19:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.294 { 00:24:44.294 "params": { 00:24:44.294 "name": "Nvme$subsystem", 00:24:44.294 "trtype": "$TEST_TRANSPORT", 00:24:44.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.294 "adrfam": "ipv4", 00:24:44.294 "trsvcid": "$NVMF_PORT", 00:24:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.294 "hdgst": ${hdgst:-false}, 00:24:44.294 "ddgst": ${ddgst:-false} 00:24:44.294 }, 00:24:44.294 "method": "bdev_nvme_attach_controller" 00:24:44.294 } 00:24:44.294 EOF 00:24:44.294 )") 00:24:44.294 18:19:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:44.294 18:19:42 -- target/dif.sh@54 -- # local file 00:24:44.294 18:19:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.294 18:19:42 -- target/dif.sh@56 -- # cat 00:24:44.294 18:19:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:44.294 18:19:42 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:44.294 18:19:42 -- common/autotest_common.sh@1320 -- # shift 00:24:44.294 18:19:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:44.294 18:19:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.294 18:19:42 -- nvmf/common.sh@542 -- # cat 00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:44.294 18:19:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 
00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:44.294 18:19:42 -- target/dif.sh@72 -- # (( file <= files )) 00:24:44.294 18:19:42 -- nvmf/common.sh@544 -- # jq . 00:24:44.294 18:19:42 -- nvmf/common.sh@545 -- # IFS=, 00:24:44.294 18:19:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:44.294 "params": { 00:24:44.294 "name": "Nvme0", 00:24:44.294 "trtype": "tcp", 00:24:44.294 "traddr": "10.0.0.2", 00:24:44.294 "adrfam": "ipv4", 00:24:44.294 "trsvcid": "4420", 00:24:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:44.294 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:44.294 "hdgst": false, 00:24:44.294 "ddgst": false 00:24:44.294 }, 00:24:44.294 "method": "bdev_nvme_attach_controller" 00:24:44.294 }' 00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:44.294 18:19:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:44.294 18:19:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:44.294 18:19:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:44.294 18:19:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:44.294 18:19:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:44.294 18:19:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:44.553 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:44.553 fio-3.35 00:24:44.553 Starting 1 thread 00:24:44.812 [2024-04-25 18:19:42.671794] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
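The job itself runs through the SPDK fio bdev plugin: fio_bdev LD_PRELOADs build/fio/spdk_bdev into the stock fio binary and hands it the JSON built from the bdev_nvme_attach_controller parameters printed above on /dev/fd/62, plus a generated job file on /dev/fd/61. The "RPC Unix domain socket ... in use" errors around here appear to come from the plugin trying to start its own RPC service while the running target already owns /var/tmp/spdk.sock; the runs complete normally regardless. A hand-written equivalent, with the generated files replaced by illustrative bdev.json and job.fio (the real job file comes from gen_fio_conf in dif.sh, so names and the exact option set are assumptions), would be roughly:

  cat > job.fio <<'EOF'
  [filename0]
  ioengine=spdk_bdev
  thread=1
  filename=Nvme0n1
  rw=randread
  bs=4096
  iodepth=4
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio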
00:24:44.812 [2024-04-25 18:19:42.671869] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:57.017 00:24:57.017 filename0: (groupid=0, jobs=1): err= 0: pid=89346: Thu Apr 25 18:19:52 2024 00:24:57.017 read: IOPS=1327, BW=5311KiB/s (5438kB/s)(52.0MiB/10032msec) 00:24:57.017 slat (nsec): min=5950, max=73558, avg=7490.83, stdev=3201.19 00:24:57.017 clat (usec): min=339, max=42469, avg=2990.10, stdev=9901.56 00:24:57.017 lat (usec): min=345, max=42491, avg=2997.59, stdev=9901.62 00:24:57.017 clat percentiles (usec): 00:24:57.017 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 363], 00:24:57.017 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 408], 00:24:57.017 | 70.00th=[ 424], 80.00th=[ 445], 90.00th=[ 490], 95.00th=[40633], 00:24:57.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:24:57.017 | 99.99th=[42206] 00:24:57.017 bw ( KiB/s): min= 2880, max= 8704, per=100.00%, avg=5326.40, stdev=1726.30, samples=20 00:24:57.017 iops : min= 720, max= 2176, avg=1331.60, stdev=431.57, samples=20 00:24:57.017 lat (usec) : 500=90.90%, 750=2.67% 00:24:57.017 lat (msec) : 10=0.03%, 50=6.40% 00:24:57.017 cpu : usr=91.73%, sys=7.57%, ctx=18, majf=0, minf=0 00:24:57.017 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:57.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.017 issued rwts: total=13320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.017 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:57.017 00:24:57.017 Run status group 0 (all jobs): 00:24:57.017 READ: bw=5311KiB/s (5438kB/s), 5311KiB/s-5311KiB/s (5438kB/s-5438kB/s), io=52.0MiB (54.6MB), run=10032-10032msec 00:24:57.017 18:19:53 -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:57.017 18:19:53 -- target/dif.sh@43 -- # local sub 00:24:57.017 18:19:53 -- target/dif.sh@45 -- # for sub in "$@" 00:24:57.017 18:19:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:57.017 18:19:53 -- target/dif.sh@36 -- # local sub_id=0 00:24:57.017 18:19:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 ************************************ 00:24:57.017 END TEST fio_dif_1_default 00:24:57.017 ************************************ 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 00:24:57.017 real 0m11.069s 00:24:57.017 user 0m9.857s 00:24:57.017 sys 0m1.026s 00:24:57.017 18:19:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 18:19:53 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:57.017 18:19:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:57.017 18:19:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 ************************************ 00:24:57.017 START TEST 
fio_dif_1_multi_subsystems 00:24:57.017 ************************************ 00:24:57.017 18:19:53 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:24:57.017 18:19:53 -- target/dif.sh@92 -- # local files=1 00:24:57.017 18:19:53 -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:57.017 18:19:53 -- target/dif.sh@28 -- # local sub 00:24:57.017 18:19:53 -- target/dif.sh@30 -- # for sub in "$@" 00:24:57.017 18:19:53 -- target/dif.sh@31 -- # create_subsystem 0 00:24:57.017 18:19:53 -- target/dif.sh@18 -- # local sub_id=0 00:24:57.017 18:19:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 bdev_null0 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 [2024-04-25 18:19:53.144263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@30 -- # for sub in "$@" 00:24:57.017 18:19:53 -- target/dif.sh@31 -- # create_subsystem 1 00:24:57.017 18:19:53 -- target/dif.sh@18 -- # local sub_id=1 00:24:57.017 18:19:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 bdev_null1 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.017 18:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.017 18:19:53 -- 
common/autotest_common.sh@10 -- # set +x 00:24:57.017 18:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.017 18:19:53 -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:57.017 18:19:53 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:57.017 18:19:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:57.017 18:19:53 -- nvmf/common.sh@520 -- # config=() 00:24:57.018 18:19:53 -- nvmf/common.sh@520 -- # local subsystem config 00:24:57.018 18:19:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.018 18:19:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.018 { 00:24:57.018 "params": { 00:24:57.018 "name": "Nvme$subsystem", 00:24:57.018 "trtype": "$TEST_TRANSPORT", 00:24:57.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.018 "adrfam": "ipv4", 00:24:57.018 "trsvcid": "$NVMF_PORT", 00:24:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.018 "hdgst": ${hdgst:-false}, 00:24:57.018 "ddgst": ${ddgst:-false} 00:24:57.018 }, 00:24:57.018 "method": "bdev_nvme_attach_controller" 00:24:57.018 } 00:24:57.018 EOF 00:24:57.018 )") 00:24:57.018 18:19:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.018 18:19:53 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.018 18:19:53 -- target/dif.sh@82 -- # gen_fio_conf 00:24:57.018 18:19:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:57.018 18:19:53 -- target/dif.sh@54 -- # local file 00:24:57.018 18:19:53 -- target/dif.sh@56 -- # cat 00:24:57.018 18:19:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:57.018 18:19:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:57.018 18:19:53 -- nvmf/common.sh@542 -- # cat 00:24:57.018 18:19:53 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:57.018 18:19:53 -- common/autotest_common.sh@1320 -- # shift 00:24:57.018 18:19:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:57.018 18:19:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.018 18:19:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:57.018 18:19:53 -- target/dif.sh@72 -- # (( file <= files )) 00:24:57.018 18:19:53 -- target/dif.sh@73 -- # cat 00:24:57.018 18:19:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:57.018 18:19:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.018 { 00:24:57.018 "params": { 00:24:57.018 "name": "Nvme$subsystem", 00:24:57.018 "trtype": "$TEST_TRANSPORT", 00:24:57.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.018 "adrfam": "ipv4", 00:24:57.018 "trsvcid": "$NVMF_PORT", 00:24:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.018 "hdgst": ${hdgst:-false}, 00:24:57.018 "ddgst": ${ddgst:-false} 00:24:57.018 }, 00:24:57.018 "method": "bdev_nvme_attach_controller" 00:24:57.018 } 00:24:57.018 EOF 00:24:57.018 )") 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:57.018 18:19:53 -- nvmf/common.sh@542 -- # cat 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:57.018 18:19:53 -- 
nvmf/common.sh@544 -- # jq . 00:24:57.018 18:19:53 -- target/dif.sh@72 -- # (( file++ )) 00:24:57.018 18:19:53 -- target/dif.sh@72 -- # (( file <= files )) 00:24:57.018 18:19:53 -- nvmf/common.sh@545 -- # IFS=, 00:24:57.018 18:19:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:57.018 "params": { 00:24:57.018 "name": "Nvme0", 00:24:57.018 "trtype": "tcp", 00:24:57.018 "traddr": "10.0.0.2", 00:24:57.018 "adrfam": "ipv4", 00:24:57.018 "trsvcid": "4420", 00:24:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:57.018 "hdgst": false, 00:24:57.018 "ddgst": false 00:24:57.018 }, 00:24:57.018 "method": "bdev_nvme_attach_controller" 00:24:57.018 },{ 00:24:57.018 "params": { 00:24:57.018 "name": "Nvme1", 00:24:57.018 "trtype": "tcp", 00:24:57.018 "traddr": "10.0.0.2", 00:24:57.018 "adrfam": "ipv4", 00:24:57.018 "trsvcid": "4420", 00:24:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:57.018 "hdgst": false, 00:24:57.018 "ddgst": false 00:24:57.018 }, 00:24:57.018 "method": "bdev_nvme_attach_controller" 00:24:57.018 }' 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:57.018 18:19:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:57.018 18:19:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:57.018 18:19:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:57.018 18:19:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:57.018 18:19:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:57.018 18:19:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.018 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:57.018 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:57.018 fio-3.35 00:24:57.018 Starting 2 threads 00:24:57.018 [2024-04-25 18:19:53.941417] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
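The second subsystem for this test mirrors the first: another null bdev (bdev_null1) exported as nqn.2016-06.io.spdk:cnode1 on the same listener, which is what produces the two bdev_nvme_attach_controller entries (Nvme0 against cnode0, Nvme1 against cnode1) in the JSON printed above. Under the same rpc.py assumption as before, that amounts to:

  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

fio then runs two jobs, filename0 and filename1, one against each controller, as the per-thread results below show.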
00:24:57.018 [2024-04-25 18:19:53.941473] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:07.024 00:25:07.024 filename0: (groupid=0, jobs=1): err= 0: pid=89506: Thu Apr 25 18:20:04 2024 00:25:07.024 read: IOPS=237, BW=951KiB/s (974kB/s)(9536KiB/10025msec) 00:25:07.024 slat (nsec): min=6369, max=52623, avg=8088.11, stdev=3261.37 00:25:07.024 clat (usec): min=360, max=41817, avg=16796.72, stdev=19850.69 00:25:07.024 lat (usec): min=367, max=41830, avg=16804.81, stdev=19850.56 00:25:07.024 clat percentiles (usec): 00:25:07.024 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 383], 20.00th=[ 396], 00:25:07.024 | 30.00th=[ 412], 40.00th=[ 433], 50.00th=[ 478], 60.00th=[40633], 00:25:07.024 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:07.024 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:07.024 | 99.99th=[41681] 00:25:07.024 bw ( KiB/s): min= 512, max= 1472, per=55.53%, avg=951.85, stdev=285.69, samples=20 00:25:07.024 iops : min= 128, max= 368, avg=237.95, stdev=71.41, samples=20 00:25:07.024 lat (usec) : 500=52.14%, 750=4.66%, 1000=2.60% 00:25:07.024 lat (msec) : 2=0.17%, 50=40.44% 00:25:07.024 cpu : usr=95.26%, sys=4.38%, ctx=10, majf=0, minf=0 00:25:07.024 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:07.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.024 issued rwts: total=2384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.024 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:07.024 filename1: (groupid=0, jobs=1): err= 0: pid=89507: Thu Apr 25 18:20:04 2024 00:25:07.024 read: IOPS=190, BW=762KiB/s (780kB/s)(7632KiB/10021msec) 00:25:07.024 slat (nsec): min=6383, max=45577, avg=8471.61, stdev=3652.63 00:25:07.024 clat (usec): min=365, max=41824, avg=20982.82, stdev=20228.47 00:25:07.024 lat (usec): min=372, max=41850, avg=20991.29, stdev=20228.36 00:25:07.024 clat percentiles (usec): 00:25:07.024 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 408], 00:25:07.024 | 30.00th=[ 433], 40.00th=[ 482], 50.00th=[40633], 60.00th=[40633], 00:25:07.024 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:07.024 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:07.024 | 99.99th=[41681] 00:25:07.024 bw ( KiB/s): min= 512, max= 1088, per=44.44%, avg=761.50, stdev=144.97, samples=20 00:25:07.024 iops : min= 128, max= 272, avg=190.35, stdev=36.28, samples=20 00:25:07.024 lat (usec) : 500=41.51%, 750=4.56%, 1000=2.99% 00:25:07.024 lat (msec) : 2=0.21%, 50=50.73% 00:25:07.024 cpu : usr=95.82%, sys=3.82%, ctx=7, majf=0, minf=9 00:25:07.024 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:07.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.024 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.024 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:07.024 00:25:07.024 Run status group 0 (all jobs): 00:25:07.024 READ: bw=1713KiB/s (1754kB/s), 762KiB/s-951KiB/s (780kB/s-974kB/s), io=16.8MiB (17.6MB), run=10021-10025msec 00:25:07.024 18:20:04 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:07.024 18:20:04 -- target/dif.sh@43 -- # local sub 00:25:07.024 18:20:04 -- target/dif.sh@45 -- # for sub in "$@" 
00:25:07.024 18:20:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:07.024 18:20:04 -- target/dif.sh@36 -- # local sub_id=0 00:25:07.024 18:20:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 18:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 18:20:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 18:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 18:20:04 -- target/dif.sh@45 -- # for sub in "$@" 00:25:07.024 18:20:04 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:07.024 18:20:04 -- target/dif.sh@36 -- # local sub_id=1 00:25:07.024 18:20:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 18:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 18:20:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 ************************************ 00:25:07.024 END TEST fio_dif_1_multi_subsystems 00:25:07.024 ************************************ 00:25:07.024 18:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 00:25:07.024 real 0m11.239s 00:25:07.024 user 0m19.977s 00:25:07.024 sys 0m1.111s 00:25:07.024 18:20:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 18:20:04 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:07.024 18:20:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:07.024 18:20:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 ************************************ 00:25:07.024 START TEST fio_dif_rand_params 00:25:07.024 ************************************ 00:25:07.024 18:20:04 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:25:07.024 18:20:04 -- target/dif.sh@100 -- # local NULL_DIF 00:25:07.024 18:20:04 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:07.024 18:20:04 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:07.024 18:20:04 -- target/dif.sh@103 -- # bs=128k 00:25:07.024 18:20:04 -- target/dif.sh@103 -- # numjobs=3 00:25:07.024 18:20:04 -- target/dif.sh@103 -- # iodepth=3 00:25:07.024 18:20:04 -- target/dif.sh@103 -- # runtime=5 00:25:07.024 18:20:04 -- target/dif.sh@105 -- # create_subsystems 0 00:25:07.024 18:20:04 -- target/dif.sh@28 -- # local sub 00:25:07.024 18:20:04 -- target/dif.sh@30 -- # for sub in "$@" 00:25:07.024 18:20:04 -- target/dif.sh@31 -- # create_subsystem 0 00:25:07.024 18:20:04 -- target/dif.sh@18 -- # local sub_id=0 00:25:07.024 18:20:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 bdev_null0 00:25:07.024 18:20:04 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 18:20:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 18:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 18:20:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 18:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 18:20:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:07.024 18:20:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.024 18:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.024 [2024-04-25 18:20:04.454043] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.024 18:20:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.024 18:20:04 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:07.024 18:20:04 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:07.024 18:20:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:07.024 18:20:04 -- nvmf/common.sh@520 -- # config=() 00:25:07.024 18:20:04 -- nvmf/common.sh@520 -- # local subsystem config 00:25:07.024 18:20:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:07.024 18:20:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.024 18:20:04 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:07.024 18:20:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.024 { 00:25:07.024 "params": { 00:25:07.024 "name": "Nvme$subsystem", 00:25:07.024 "trtype": "$TEST_TRANSPORT", 00:25:07.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.024 "adrfam": "ipv4", 00:25:07.024 "trsvcid": "$NVMF_PORT", 00:25:07.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.024 "hdgst": ${hdgst:-false}, 00:25:07.024 "ddgst": ${ddgst:-false} 00:25:07.024 }, 00:25:07.024 "method": "bdev_nvme_attach_controller" 00:25:07.024 } 00:25:07.024 EOF 00:25:07.024 )") 00:25:07.024 18:20:04 -- target/dif.sh@82 -- # gen_fio_conf 00:25:07.024 18:20:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:07.024 18:20:04 -- target/dif.sh@54 -- # local file 00:25:07.024 18:20:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:07.024 18:20:04 -- target/dif.sh@56 -- # cat 00:25:07.024 18:20:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:07.024 18:20:04 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:07.024 18:20:04 -- common/autotest_common.sh@1320 -- # shift 00:25:07.024 18:20:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:07.024 18:20:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.024 18:20:04 -- nvmf/common.sh@542 -- # cat 00:25:07.024 18:20:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:07.024 18:20:04 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:07.025 18:20:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:07.025 18:20:04 -- target/dif.sh@72 -- # (( file <= files )) 00:25:07.025 18:20:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:07.025 18:20:04 -- nvmf/common.sh@544 -- # jq . 00:25:07.025 18:20:04 -- nvmf/common.sh@545 -- # IFS=, 00:25:07.025 18:20:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:07.025 "params": { 00:25:07.025 "name": "Nvme0", 00:25:07.025 "trtype": "tcp", 00:25:07.025 "traddr": "10.0.0.2", 00:25:07.025 "adrfam": "ipv4", 00:25:07.025 "trsvcid": "4420", 00:25:07.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:07.025 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:07.025 "hdgst": false, 00:25:07.025 "ddgst": false 00:25:07.025 }, 00:25:07.025 "method": "bdev_nvme_attach_controller" 00:25:07.025 }' 00:25:07.025 18:20:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:07.025 18:20:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:07.025 18:20:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.025 18:20:04 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:07.025 18:20:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:07.025 18:20:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:07.025 18:20:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:07.025 18:20:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:07.025 18:20:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:07.025 18:20:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:07.025 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:07.025 ... 00:25:07.025 fio-3.35 00:25:07.025 Starting 3 threads 00:25:07.283 [2024-04-25 18:20:05.096638] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
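The trace above walks through the whole setup for the first rand-params run: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3 is created, exposed through NVMe-oF subsystem nqn.2016-06.io.spdk:cnode0 on a TCP listener at 10.0.0.2:4420, and fio is then launched with the spdk_bdev ioengine against a generated JSON config that attaches that subsystem back as controller "Nvme0". As a rough standalone sketch (not the literal dif.sh code), the same flow with SPDK's scripts/rpc.py and plain fio options would look like this; the filename Nvme0n1 is an assumption about the namespace bdev name that bdev_nvme_attach_controller produces for controller "Nvme0":

# target side: null bdev with DIF type 3 behind an NVMe/TCP subsystem
# (the tcp transport itself is assumed to have been created earlier in the test)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# initiator side: fio through the SPDK bdev plugin; bdev.json stands in for the
# bdev_nvme_attach_controller config that gen_nvmf_target_json prints in the trace above
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
  --name=filename0 --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=5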
00:25:07.283 [2024-04-25 18:20:05.096704] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:12.552 00:25:12.552 filename0: (groupid=0, jobs=1): err= 0: pid=89663: Thu Apr 25 18:20:10 2024 00:25:12.552 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(172MiB/5005msec) 00:25:12.552 slat (nsec): min=6626, max=52643, avg=10768.67, stdev=4177.59 00:25:12.552 clat (usec): min=5431, max=53146, avg=10869.63, stdev=4744.60 00:25:12.552 lat (usec): min=5443, max=53159, avg=10880.40, stdev=4744.54 00:25:12.552 clat percentiles (usec): 00:25:12.552 | 1.00th=[ 5800], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[ 9634], 00:25:12.552 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:25:12.552 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:25:12.552 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52167], 99.95th=[53216], 00:25:12.552 | 99.99th=[53216] 00:25:12.552 bw ( KiB/s): min=32191, max=37632, per=33.53%, avg=35292.33, stdev=2151.36, samples=9 00:25:12.552 iops : min= 251, max= 294, avg=275.67, stdev=16.90, samples=9 00:25:12.552 lat (msec) : 10=28.35%, 20=70.34%, 50=0.07%, 100=1.23% 00:25:12.552 cpu : usr=92.45%, sys=6.16%, ctx=5, majf=0, minf=9 00:25:12.552 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.552 issued rwts: total=1379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:12.552 filename0: (groupid=0, jobs=1): err= 0: pid=89664: Thu Apr 25 18:20:10 2024 00:25:12.552 read: IOPS=315, BW=39.4MiB/s (41.4MB/s)(197MiB/5005msec) 00:25:12.552 slat (nsec): min=6606, max=44535, avg=11194.86, stdev=3421.26 00:25:12.552 clat (usec): min=5123, max=50352, avg=9492.88, stdev=3625.33 00:25:12.552 lat (usec): min=5144, max=50362, avg=9504.07, stdev=3625.45 00:25:12.552 clat percentiles (usec): 00:25:12.552 | 1.00th=[ 5800], 5.00th=[ 6783], 10.00th=[ 7898], 20.00th=[ 8586], 00:25:12.552 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:25:12.552 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10683], 00:25:12.552 | 99.00th=[11863], 99.50th=[48497], 99.90th=[50070], 99.95th=[50594], 00:25:12.552 | 99.99th=[50594] 00:25:12.552 bw ( KiB/s): min=38400, max=42752, per=38.69%, avg=40732.44, stdev=1601.00, samples=9 00:25:12.552 iops : min= 300, max= 334, avg=318.22, stdev=12.51, samples=9 00:25:12.552 lat (msec) : 10=78.85%, 20=20.39%, 50=0.57%, 100=0.19% 00:25:12.552 cpu : usr=91.81%, sys=6.55%, ctx=4, majf=0, minf=9 00:25:12.552 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.552 issued rwts: total=1579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:12.552 filename0: (groupid=0, jobs=1): err= 0: pid=89665: Thu Apr 25 18:20:10 2024 00:25:12.552 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5004msec) 00:25:12.552 slat (nsec): min=6618, max=45656, avg=9158.92, stdev=3717.14 00:25:12.552 clat (usec): min=3677, max=57852, avg=12942.22, stdev=3659.21 00:25:12.552 lat (usec): min=3684, max=57877, avg=12951.38, stdev=3659.36 00:25:12.552 clat percentiles (usec): 
00:25:12.552 | 1.00th=[ 3916], 5.00th=[ 8225], 10.00th=[ 9503], 20.00th=[12387], 00:25:12.552 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:25:12.552 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14222], 95.00th=[14615], 00:25:12.552 | 99.00th=[15139], 99.50th=[52691], 99.90th=[57410], 99.95th=[57934], 00:25:12.552 | 99.99th=[57934] 00:25:12.552 bw ( KiB/s): min=26880, max=30720, per=27.72%, avg=29184.00, stdev=1384.53, samples=9 00:25:12.552 iops : min= 210, max= 240, avg=228.00, stdev=10.82, samples=9 00:25:12.552 lat (msec) : 4=1.30%, 10=10.54%, 20=87.65%, 100=0.52% 00:25:12.552 cpu : usr=93.16%, sys=5.66%, ctx=8, majf=0, minf=9 00:25:12.552 IO depths : 1=28.8%, 2=71.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.552 issued rwts: total=1158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:12.552 00:25:12.552 Run status group 0 (all jobs): 00:25:12.552 READ: bw=103MiB/s (108MB/s), 28.9MiB/s-39.4MiB/s (30.3MB/s-41.4MB/s), io=515MiB (539MB), run=5004-5005msec 00:25:12.552 18:20:10 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:12.552 18:20:10 -- target/dif.sh@43 -- # local sub 00:25:12.552 18:20:10 -- target/dif.sh@45 -- # for sub in "$@" 00:25:12.552 18:20:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:12.552 18:20:10 -- target/dif.sh@36 -- # local sub_id=0 00:25:12.552 18:20:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:12.552 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.552 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.552 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.552 18:20:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:12.552 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.552 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.552 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.552 18:20:10 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:12.552 18:20:10 -- target/dif.sh@109 -- # bs=4k 00:25:12.552 18:20:10 -- target/dif.sh@109 -- # numjobs=8 00:25:12.552 18:20:10 -- target/dif.sh@109 -- # iodepth=16 00:25:12.552 18:20:10 -- target/dif.sh@109 -- # runtime= 00:25:12.552 18:20:10 -- target/dif.sh@109 -- # files=2 00:25:12.552 18:20:10 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:12.552 18:20:10 -- target/dif.sh@28 -- # local sub 00:25:12.552 18:20:10 -- target/dif.sh@30 -- # for sub in "$@" 00:25:12.552 18:20:10 -- target/dif.sh@31 -- # create_subsystem 0 00:25:12.552 18:20:10 -- target/dif.sh@18 -- # local sub_id=0 00:25:12.553 18:20:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:12.553 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.553 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.553 bdev_null0 00:25:12.553 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.553 18:20:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:12.553 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.553 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.553 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:12.553 18:20:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:12.553 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.553 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.553 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.553 18:20:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:12.553 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.553 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.553 [2024-04-25 18:20:10.479410] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.812 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.812 18:20:10 -- target/dif.sh@30 -- # for sub in "$@" 00:25:12.812 18:20:10 -- target/dif.sh@31 -- # create_subsystem 1 00:25:12.812 18:20:10 -- target/dif.sh@18 -- # local sub_id=1 00:25:12.812 18:20:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:12.812 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.812 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 bdev_null1 00:25:12.812 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.812 18:20:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:12.812 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.812 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.812 18:20:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:12.812 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.812 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.812 18:20:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.812 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.812 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.812 18:20:10 -- target/dif.sh@30 -- # for sub in "$@" 00:25:12.812 18:20:10 -- target/dif.sh@31 -- # create_subsystem 2 00:25:12.812 18:20:10 -- target/dif.sh@18 -- # local sub_id=2 00:25:12.812 18:20:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:12.812 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.812 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 bdev_null2 00:25:12.812 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.812 18:20:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:12.812 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.812 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.813 18:20:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:12.813 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.813 18:20:10 -- 
common/autotest_common.sh@10 -- # set +x 00:25:12.813 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.813 18:20:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:12.813 18:20:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.813 18:20:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.813 18:20:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.813 18:20:10 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:12.813 18:20:10 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:12.813 18:20:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:12.813 18:20:10 -- nvmf/common.sh@520 -- # config=() 00:25:12.813 18:20:10 -- nvmf/common.sh@520 -- # local subsystem config 00:25:12.813 18:20:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:12.813 18:20:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.813 18:20:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:12.813 { 00:25:12.813 "params": { 00:25:12.813 "name": "Nvme$subsystem", 00:25:12.813 "trtype": "$TEST_TRANSPORT", 00:25:12.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.813 "adrfam": "ipv4", 00:25:12.813 "trsvcid": "$NVMF_PORT", 00:25:12.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.813 "hdgst": ${hdgst:-false}, 00:25:12.813 "ddgst": ${ddgst:-false} 00:25:12.813 }, 00:25:12.813 "method": "bdev_nvme_attach_controller" 00:25:12.813 } 00:25:12.813 EOF 00:25:12.813 )") 00:25:12.813 18:20:10 -- target/dif.sh@82 -- # gen_fio_conf 00:25:12.813 18:20:10 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.813 18:20:10 -- target/dif.sh@54 -- # local file 00:25:12.813 18:20:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:12.813 18:20:10 -- target/dif.sh@56 -- # cat 00:25:12.813 18:20:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:12.813 18:20:10 -- nvmf/common.sh@542 -- # cat 00:25:12.813 18:20:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:12.813 18:20:10 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:12.813 18:20:10 -- common/autotest_common.sh@1320 -- # shift 00:25:12.813 18:20:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:12.813 18:20:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.813 18:20:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:12.813 18:20:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:12.813 { 00:25:12.813 "params": { 00:25:12.813 "name": "Nvme$subsystem", 00:25:12.813 "trtype": "$TEST_TRANSPORT", 00:25:12.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.813 "adrfam": "ipv4", 00:25:12.813 "trsvcid": "$NVMF_PORT", 00:25:12.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.813 "hdgst": ${hdgst:-false}, 00:25:12.813 "ddgst": ${ddgst:-false} 00:25:12.813 }, 00:25:12.813 "method": "bdev_nvme_attach_controller" 00:25:12.813 } 00:25:12.813 EOF 00:25:12.813 
)") 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:12.813 18:20:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:12.813 18:20:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:12.813 18:20:10 -- nvmf/common.sh@542 -- # cat 00:25:12.813 18:20:10 -- target/dif.sh@73 -- # cat 00:25:12.813 18:20:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:12.813 18:20:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:12.813 { 00:25:12.813 "params": { 00:25:12.813 "name": "Nvme$subsystem", 00:25:12.813 "trtype": "$TEST_TRANSPORT", 00:25:12.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.813 "adrfam": "ipv4", 00:25:12.813 "trsvcid": "$NVMF_PORT", 00:25:12.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.813 "hdgst": ${hdgst:-false}, 00:25:12.813 "ddgst": ${ddgst:-false} 00:25:12.813 }, 00:25:12.813 "method": "bdev_nvme_attach_controller" 00:25:12.813 } 00:25:12.813 EOF 00:25:12.813 )") 00:25:12.813 18:20:10 -- nvmf/common.sh@542 -- # cat 00:25:12.813 18:20:10 -- target/dif.sh@72 -- # (( file++ )) 00:25:12.813 18:20:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:12.813 18:20:10 -- target/dif.sh@73 -- # cat 00:25:12.813 18:20:10 -- nvmf/common.sh@544 -- # jq . 00:25:12.813 18:20:10 -- target/dif.sh@72 -- # (( file++ )) 00:25:12.813 18:20:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:12.813 18:20:10 -- nvmf/common.sh@545 -- # IFS=, 00:25:12.813 18:20:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:12.813 "params": { 00:25:12.813 "name": "Nvme0", 00:25:12.813 "trtype": "tcp", 00:25:12.813 "traddr": "10.0.0.2", 00:25:12.813 "adrfam": "ipv4", 00:25:12.813 "trsvcid": "4420", 00:25:12.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:12.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:12.813 "hdgst": false, 00:25:12.813 "ddgst": false 00:25:12.813 }, 00:25:12.813 "method": "bdev_nvme_attach_controller" 00:25:12.813 },{ 00:25:12.813 "params": { 00:25:12.813 "name": "Nvme1", 00:25:12.813 "trtype": "tcp", 00:25:12.813 "traddr": "10.0.0.2", 00:25:12.813 "adrfam": "ipv4", 00:25:12.813 "trsvcid": "4420", 00:25:12.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.813 "hdgst": false, 00:25:12.813 "ddgst": false 00:25:12.813 }, 00:25:12.813 "method": "bdev_nvme_attach_controller" 00:25:12.813 },{ 00:25:12.813 "params": { 00:25:12.813 "name": "Nvme2", 00:25:12.813 "trtype": "tcp", 00:25:12.813 "traddr": "10.0.0.2", 00:25:12.813 "adrfam": "ipv4", 00:25:12.813 "trsvcid": "4420", 00:25:12.813 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:12.813 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:12.813 "hdgst": false, 00:25:12.813 "ddgst": false 00:25:12.813 }, 00:25:12.813 "method": "bdev_nvme_attach_controller" 00:25:12.813 }' 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:12.813 18:20:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:12.813 18:20:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:12.813 18:20:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:12.813 18:20:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:12.813 18:20:10 -- 
common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:12.813 18:20:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:13.072 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:13.072 ... 00:25:13.072 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:13.072 ... 00:25:13.072 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:13.072 ... 00:25:13.072 fio-3.35 00:25:13.072 Starting 24 threads 00:25:13.640 [2024-04-25 18:20:11.459504] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:13.640 [2024-04-25 18:20:11.459649] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:25.848 00:25:25.848 filename0: (groupid=0, jobs=1): err= 0: pid=89760: Thu Apr 25 18:20:21 2024 00:25:25.848 read: IOPS=244, BW=980KiB/s (1003kB/s)(9832KiB/10037msec) 00:25:25.848 slat (usec): min=6, max=8026, avg=23.11, stdev=266.79 00:25:25.848 clat (msec): min=30, max=153, avg=65.21, stdev=20.71 00:25:25.848 lat (msec): min=30, max=153, avg=65.23, stdev=20.72 00:25:25.848 clat percentiles (msec): 00:25:25.848 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 47], 00:25:25.848 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:25:25.848 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 105], 00:25:25.848 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:25:25.848 | 99.99th=[ 155] 00:25:25.848 bw ( KiB/s): min= 560, max= 1296, per=4.63%, avg=976.80, stdev=193.31, samples=20 00:25:25.848 iops : min= 140, max= 324, avg=244.20, stdev=48.33, samples=20 00:25:25.848 lat (msec) : 50=27.26%, 100=66.27%, 250=6.47% 00:25:25.848 cpu : usr=39.93%, sys=0.54%, ctx=1098, majf=0, minf=9 00:25:25.848 IO depths : 1=0.6%, 2=1.2%, 4=7.7%, 8=77.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:25.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.848 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.848 issued rwts: total=2458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.848 filename0: (groupid=0, jobs=1): err= 0: pid=89761: Thu Apr 25 18:20:21 2024 00:25:25.848 read: IOPS=240, BW=963KiB/s (986kB/s)(9644KiB/10018msec) 00:25:25.848 slat (usec): min=7, max=8022, avg=16.63, stdev=183.43 00:25:25.848 clat (msec): min=16, max=150, avg=66.32, stdev=20.70 00:25:25.848 lat (msec): min=16, max=150, avg=66.34, stdev=20.71 00:25:25.848 clat percentiles (msec): 00:25:25.848 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 50], 00:25:25.848 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:25:25.848 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 97], 95.00th=[ 109], 00:25:25.848 | 99.00th=[ 125], 99.50th=[ 134], 99.90th=[ 150], 99.95th=[ 150], 00:25:25.848 | 99.99th=[ 150] 00:25:25.848 bw ( KiB/s): min= 696, max= 1264, per=4.57%, avg=962.45, stdev=150.90, samples=20 00:25:25.848 iops : min= 174, max= 316, avg=240.60, stdev=37.73, samples=20 00:25:25.848 lat (msec) : 20=0.25%, 50=21.48%, 100=70.34%, 250=7.92% 00:25:25.848 cpu : usr=38.94%, sys=0.72%, ctx=1339, majf=0, minf=9 00:25:25.848 IO depths : 1=0.5%, 2=1.2%, 4=6.0%, 8=78.4%, 
16=13.8%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=89.3%, 8=7.0%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename0: (groupid=0, jobs=1): err= 0: pid=89762: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=220, BW=881KiB/s (902kB/s)(8832KiB/10022msec) 00:25:25.849 slat (nsec): min=3731, max=50799, avg=12484.98, stdev=6791.28 00:25:25.849 clat (msec): min=24, max=164, avg=72.48, stdev=24.13 00:25:25.849 lat (msec): min=24, max=164, avg=72.50, stdev=24.13 00:25:25.849 clat percentiles (msec): 00:25:25.849 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 50], 00:25:25.849 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 73], 00:25:25.849 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 117], 00:25:25.849 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:25:25.849 | 99.99th=[ 165] 00:25:25.849 bw ( KiB/s): min= 512, max= 1248, per=4.16%, avg=876.80, stdev=172.31, samples=20 00:25:25.849 iops : min= 128, max= 312, avg=219.20, stdev=43.08, samples=20 00:25:25.849 lat (msec) : 50=20.43%, 100=66.08%, 250=13.50% 00:25:25.849 cpu : usr=32.58%, sys=0.61%, ctx=893, majf=0, minf=9 00:25:25.849 IO depths : 1=1.0%, 2=2.1%, 4=8.1%, 8=76.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename0: (groupid=0, jobs=1): err= 0: pid=89763: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=214, BW=858KiB/s (879kB/s)(8608KiB/10029msec) 00:25:25.849 slat (usec): min=3, max=3993, avg=13.46, stdev=86.05 00:25:25.849 clat (msec): min=30, max=180, avg=74.42, stdev=24.30 00:25:25.849 lat (msec): min=30, max=180, avg=74.43, stdev=24.30 00:25:25.849 clat percentiles (msec): 00:25:25.849 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 52], 00:25:25.849 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 80], 00:25:25.849 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 00:25:25.849 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 182], 99.95th=[ 182], 00:25:25.849 | 99.99th=[ 182] 00:25:25.849 bw ( KiB/s): min= 592, max= 1200, per=4.06%, avg=856.80, stdev=172.62, samples=20 00:25:25.849 iops : min= 148, max= 300, avg=214.20, stdev=43.15, samples=20 00:25:25.849 lat (msec) : 50=18.36%, 100=67.24%, 250=14.41% 00:25:25.849 cpu : usr=37.75%, sys=0.67%, ctx=1120, majf=0, minf=9 00:25:25.849 IO depths : 1=0.8%, 2=2.0%, 4=9.6%, 8=74.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename0: (groupid=0, jobs=1): err= 0: pid=89764: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=241, BW=964KiB/s (987kB/s)(9668KiB/10026msec) 00:25:25.849 slat (usec): min=3, max=8016, avg=15.01, stdev=162.96 00:25:25.849 clat (msec): min=25, max=179, avg=66.27, stdev=23.49 00:25:25.849 lat (msec): min=25, max=179, 
avg=66.28, stdev=23.50 00:25:25.849 clat percentiles (msec): 00:25:25.849 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 48], 00:25:25.849 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 67], 00:25:25.849 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 99], 95.00th=[ 117], 00:25:25.849 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 180], 99.95th=[ 180], 00:25:25.849 | 99.99th=[ 180] 00:25:25.849 bw ( KiB/s): min= 560, max= 1344, per=4.56%, avg=960.40, stdev=214.21, samples=20 00:25:25.849 iops : min= 140, max= 336, avg=240.10, stdev=53.55, samples=20 00:25:25.849 lat (msec) : 50=27.27%, 100=63.59%, 250=9.14% 00:25:25.849 cpu : usr=38.85%, sys=0.61%, ctx=1091, majf=0, minf=9 00:25:25.849 IO depths : 1=0.9%, 2=2.1%, 4=10.3%, 8=74.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=90.1%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename0: (groupid=0, jobs=1): err= 0: pid=89765: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=202, BW=809KiB/s (829kB/s)(8120KiB/10032msec) 00:25:25.849 slat (usec): min=6, max=8029, avg=24.47, stdev=251.56 00:25:25.849 clat (msec): min=27, max=166, avg=78.89, stdev=23.32 00:25:25.849 lat (msec): min=27, max=166, avg=78.91, stdev=23.31 00:25:25.849 clat percentiles (msec): 00:25:25.849 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 61], 00:25:25.849 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 83], 00:25:25.849 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 121], 00:25:25.849 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:25:25.849 | 99.99th=[ 167] 00:25:25.849 bw ( KiB/s): min= 560, max= 1072, per=3.82%, avg=805.65, stdev=155.85, samples=20 00:25:25.849 iops : min= 140, max= 268, avg=201.40, stdev=38.98, samples=20 00:25:25.849 lat (msec) : 50=8.52%, 100=73.40%, 250=18.08% 00:25:25.849 cpu : usr=37.95%, sys=0.61%, ctx=1077, majf=0, minf=9 00:25:25.849 IO depths : 1=2.5%, 2=5.9%, 4=16.0%, 8=65.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename0: (groupid=0, jobs=1): err= 0: pid=89766: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=204, BW=818KiB/s (838kB/s)(8204KiB/10027msec) 00:25:25.849 slat (usec): min=5, max=8027, avg=28.52, stdev=353.49 00:25:25.849 clat (msec): min=29, max=173, avg=78.08, stdev=21.75 00:25:25.849 lat (msec): min=29, max=173, avg=78.10, stdev=21.75 00:25:25.849 clat percentiles (msec): 00:25:25.849 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 61], 00:25:25.849 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 81], 00:25:25.849 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 120], 00:25:25.849 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 174], 99.95th=[ 174], 00:25:25.849 | 99.99th=[ 174] 00:25:25.849 bw ( KiB/s): min= 592, max= 1024, per=3.87%, avg=814.05, stdev=118.64, samples=20 00:25:25.849 iops : min= 148, max= 256, avg=203.50, stdev=29.66, samples=20 00:25:25.849 lat (msec) : 50=5.27%, 100=78.89%, 250=15.85% 00:25:25.849 cpu : usr=33.11%, sys=0.63%, ctx=1193, majf=0, minf=9 
00:25:25.849 IO depths : 1=1.6%, 2=3.7%, 4=11.2%, 8=71.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename0: (groupid=0, jobs=1): err= 0: pid=89767: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=216, BW=866KiB/s (887kB/s)(8704KiB/10052msec) 00:25:25.849 slat (usec): min=3, max=8067, avg=24.90, stdev=298.01 00:25:25.849 clat (msec): min=13, max=162, avg=73.60, stdev=24.40 00:25:25.849 lat (msec): min=13, max=162, avg=73.63, stdev=24.40 00:25:25.849 clat percentiles (msec): 00:25:25.849 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 55], 00:25:25.849 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 75], 00:25:25.849 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 122], 00:25:25.849 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 163], 00:25:25.849 | 99.99th=[ 163] 00:25:25.849 bw ( KiB/s): min= 600, max= 1200, per=4.10%, avg=864.00, stdev=192.60, samples=20 00:25:25.849 iops : min= 150, max= 300, avg=216.00, stdev=48.15, samples=20 00:25:25.849 lat (msec) : 20=0.74%, 50=15.62%, 100=70.36%, 250=13.28% 00:25:25.849 cpu : usr=37.21%, sys=0.47%, ctx=990, majf=0, minf=9 00:25:25.849 IO depths : 1=1.7%, 2=4.0%, 4=12.5%, 8=70.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename1: (groupid=0, jobs=1): err= 0: pid=89768: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=247, BW=990KiB/s (1014kB/s)(9948KiB/10044msec) 00:25:25.849 slat (nsec): min=3748, max=66216, avg=12376.06, stdev=7692.92 00:25:25.849 clat (usec): min=1615, max=142400, avg=64487.17, stdev=23767.62 00:25:25.849 lat (usec): min=1622, max=142421, avg=64499.55, stdev=23768.00 00:25:25.849 clat percentiles (usec): 00:25:25.849 | 1.00th=[ 1713], 5.00th=[ 32637], 10.00th=[ 39060], 20.00th=[ 46924], 00:25:25.849 | 30.00th=[ 52167], 40.00th=[ 59507], 50.00th=[ 63177], 60.00th=[ 67634], 00:25:25.849 | 70.00th=[ 71828], 80.00th=[ 82314], 90.00th=[ 95945], 95.00th=[108528], 00:25:25.849 | 99.00th=[124257], 99.50th=[141558], 99.90th=[141558], 99.95th=[141558], 00:25:25.849 | 99.99th=[141558] 00:25:25.849 bw ( KiB/s): min= 608, max= 1904, per=4.69%, avg=987.60, stdev=271.41, samples=20 00:25:25.849 iops : min= 152, max= 476, avg=246.90, stdev=67.85, samples=20 00:25:25.849 lat (msec) : 2=1.29%, 4=0.64%, 10=0.92%, 20=0.36%, 50=24.29% 00:25:25.849 lat (msec) : 100=64.46%, 250=8.04% 00:25:25.849 cpu : usr=43.26%, sys=0.82%, ctx=1167, majf=0, minf=9 00:25:25.849 IO depths : 1=1.0%, 2=2.2%, 4=8.8%, 8=75.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.849 issued rwts: total=2487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.849 filename1: (groupid=0, jobs=1): err= 0: pid=89769: Thu Apr 25 18:20:21 2024 00:25:25.849 read: IOPS=221, BW=887KiB/s 
(908kB/s)(8904KiB/10036msec) 00:25:25.849 slat (usec): min=6, max=4056, avg=20.15, stdev=170.24 00:25:25.849 clat (msec): min=26, max=175, avg=71.91, stdev=23.86 00:25:25.849 lat (msec): min=26, max=175, avg=71.93, stdev=23.86 00:25:25.849 clat percentiles (msec): 00:25:25.849 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 54], 00:25:25.849 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 72], 00:25:25.849 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 121], 00:25:25.850 | 99.00th=[ 140], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 176], 00:25:25.850 | 99.99th=[ 176] 00:25:25.850 bw ( KiB/s): min= 512, max= 1256, per=4.21%, avg=886.40, stdev=188.70, samples=20 00:25:25.850 iops : min= 128, max= 314, avg=221.60, stdev=47.18, samples=20 00:25:25.850 lat (msec) : 50=16.76%, 100=70.93%, 250=12.31% 00:25:25.850 cpu : usr=38.14%, sys=0.72%, ctx=1254, majf=0, minf=9 00:25:25.850 IO depths : 1=1.9%, 2=4.4%, 4=12.8%, 8=69.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.850 filename1: (groupid=0, jobs=1): err= 0: pid=89770: Thu Apr 25 18:20:21 2024 00:25:25.850 read: IOPS=196, BW=788KiB/s (806kB/s)(7896KiB/10026msec) 00:25:25.850 slat (nsec): min=3563, max=65607, avg=12958.03, stdev=6851.50 00:25:25.850 clat (msec): min=36, max=162, avg=81.18, stdev=22.36 00:25:25.850 lat (msec): min=36, max=162, avg=81.19, stdev=22.36 00:25:25.850 clat percentiles (msec): 00:25:25.850 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:25:25.850 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:25:25.850 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 129], 00:25:25.850 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:25:25.850 | 99.99th=[ 163] 00:25:25.850 bw ( KiB/s): min= 592, max= 1024, per=3.72%, avg=783.20, stdev=146.64, samples=20 00:25:25.850 iops : min= 148, max= 256, avg=195.80, stdev=36.66, samples=20 00:25:25.850 lat (msec) : 50=7.55%, 100=74.77%, 250=17.68% 00:25:25.850 cpu : usr=32.64%, sys=0.51%, ctx=891, majf=0, minf=9 00:25:25.850 IO depths : 1=1.8%, 2=3.8%, 4=12.2%, 8=70.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.850 filename1: (groupid=0, jobs=1): err= 0: pid=89771: Thu Apr 25 18:20:21 2024 00:25:25.850 read: IOPS=243, BW=974KiB/s (998kB/s)(9796KiB/10054msec) 00:25:25.850 slat (usec): min=3, max=4023, avg=21.52, stdev=198.30 00:25:25.850 clat (msec): min=8, max=180, avg=65.43, stdev=24.24 00:25:25.850 lat (msec): min=8, max=180, avg=65.45, stdev=24.24 00:25:25.850 clat percentiles (msec): 00:25:25.850 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 45], 00:25:25.850 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 68], 00:25:25.850 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 112], 00:25:25.850 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 182], 99.95th=[ 182], 00:25:25.850 | 99.99th=[ 182] 00:25:25.850 bw ( KiB/s): min= 608, max= 1256, per=4.62%, avg=973.20, stdev=210.96, 
samples=20 00:25:25.850 iops : min= 152, max= 314, avg=243.30, stdev=52.74, samples=20 00:25:25.850 lat (msec) : 10=0.65%, 20=0.65%, 50=30.09%, 100=58.47%, 250=10.13% 00:25:25.850 cpu : usr=46.06%, sys=0.70%, ctx=1169, majf=0, minf=9 00:25:25.850 IO depths : 1=0.8%, 2=1.7%, 4=7.9%, 8=76.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 issued rwts: total=2449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.850 filename1: (groupid=0, jobs=1): err= 0: pid=89772: Thu Apr 25 18:20:21 2024 00:25:25.850 read: IOPS=202, BW=809KiB/s (829kB/s)(8116KiB/10026msec) 00:25:25.850 slat (usec): min=4, max=8065, avg=26.99, stdev=295.35 00:25:25.850 clat (msec): min=33, max=159, avg=78.81, stdev=22.74 00:25:25.850 lat (msec): min=33, max=159, avg=78.84, stdev=22.73 00:25:25.850 clat percentiles (msec): 00:25:25.850 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:25:25.850 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 85], 00:25:25.850 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 125], 00:25:25.850 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:25:25.850 | 99.99th=[ 159] 00:25:25.850 bw ( KiB/s): min= 528, max= 1024, per=3.82%, avg=805.20, stdev=129.89, samples=20 00:25:25.850 iops : min= 132, max= 256, avg=201.30, stdev=32.47, samples=20 00:25:25.850 lat (msec) : 50=7.05%, 100=77.08%, 250=15.87% 00:25:25.850 cpu : usr=37.58%, sys=0.61%, ctx=995, majf=0, minf=9 00:25:25.850 IO depths : 1=1.9%, 2=4.8%, 4=15.0%, 8=66.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 complete : 0=0.0%, 4=91.3%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.850 filename1: (groupid=0, jobs=1): err= 0: pid=89773: Thu Apr 25 18:20:21 2024 00:25:25.850 read: IOPS=201, BW=806KiB/s (826kB/s)(8076KiB/10014msec) 00:25:25.850 slat (usec): min=6, max=8032, avg=17.70, stdev=178.63 00:25:25.850 clat (msec): min=31, max=163, avg=79.21, stdev=22.82 00:25:25.850 lat (msec): min=31, max=163, avg=79.23, stdev=22.82 00:25:25.850 clat percentiles (msec): 00:25:25.850 | 1.00th=[ 36], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 61], 00:25:25.850 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 83], 00:25:25.850 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 127], 00:25:25.850 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 163], 99.95th=[ 163], 00:25:25.850 | 99.99th=[ 163] 00:25:25.850 bw ( KiB/s): min= 512, max= 1024, per=3.78%, avg=797.47, stdev=131.21, samples=19 00:25:25.850 iops : min= 128, max= 256, avg=199.37, stdev=32.80, samples=19 00:25:25.850 lat (msec) : 50=7.03%, 100=75.53%, 250=17.43% 00:25:25.850 cpu : usr=38.46%, sys=0.60%, ctx=1049, majf=0, minf=9 00:25:25.850 IO depths : 1=2.7%, 2=6.3%, 4=16.8%, 8=64.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 issued rwts: total=2019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.850 filename1: (groupid=0, jobs=1): err= 0: 
pid=89774: Thu Apr 25 18:20:21 2024 00:25:25.850 read: IOPS=208, BW=835KiB/s (855kB/s)(8380KiB/10032msec) 00:25:25.850 slat (usec): min=6, max=8018, avg=20.14, stdev=232.59 00:25:25.850 clat (msec): min=24, max=141, avg=76.47, stdev=23.94 00:25:25.850 lat (msec): min=24, max=141, avg=76.49, stdev=23.94 00:25:25.850 clat percentiles (msec): 00:25:25.850 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 58], 00:25:25.850 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 81], 00:25:25.850 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 120], 00:25:25.850 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:25:25.850 | 99.99th=[ 142] 00:25:25.850 bw ( KiB/s): min= 512, max= 1072, per=3.95%, avg=831.65, stdev=156.06, samples=20 00:25:25.850 iops : min= 128, max= 268, avg=207.90, stdev=39.02, samples=20 00:25:25.850 lat (msec) : 50=13.46%, 100=69.55%, 250=16.99% 00:25:25.850 cpu : usr=33.29%, sys=0.75%, ctx=1156, majf=0, minf=9 00:25:25.850 IO depths : 1=1.5%, 2=3.2%, 4=10.8%, 8=72.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.850 filename1: (groupid=0, jobs=1): err= 0: pid=89775: Thu Apr 25 18:20:21 2024 00:25:25.850 read: IOPS=195, BW=783KiB/s (802kB/s)(7844KiB/10014msec) 00:25:25.850 slat (usec): min=3, max=9023, avg=36.33, stdev=424.09 00:25:25.850 clat (msec): min=32, max=157, avg=81.50, stdev=23.92 00:25:25.850 lat (msec): min=32, max=157, avg=81.54, stdev=23.90 00:25:25.850 clat percentiles (msec): 00:25:25.850 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 62], 00:25:25.850 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 86], 00:25:25.850 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 128], 00:25:25.850 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:25:25.850 | 99.99th=[ 159] 00:25:25.850 bw ( KiB/s): min= 592, max= 1056, per=3.69%, avg=778.05, stdev=133.61, samples=20 00:25:25.850 iops : min= 148, max= 264, avg=194.50, stdev=33.40, samples=20 00:25:25.850 lat (msec) : 50=6.12%, 100=70.02%, 250=23.87% 00:25:25.850 cpu : usr=36.61%, sys=0.54%, ctx=993, majf=0, minf=9 00:25:25.850 IO depths : 1=2.3%, 2=5.1%, 4=15.1%, 8=66.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 complete : 0=0.0%, 4=91.6%, 8=3.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.850 issued rwts: total=1961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.850 filename2: (groupid=0, jobs=1): err= 0: pid=89776: Thu Apr 25 18:20:21 2024 00:25:25.850 read: IOPS=192, BW=770KiB/s (789kB/s)(7720KiB/10022msec) 00:25:25.850 slat (usec): min=4, max=8041, avg=23.48, stdev=241.71 00:25:25.850 clat (msec): min=23, max=163, avg=82.84, stdev=23.22 00:25:25.850 lat (msec): min=23, max=163, avg=82.86, stdev=23.21 00:25:25.850 clat percentiles (msec): 00:25:25.850 | 1.00th=[ 40], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 64], 00:25:25.850 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 80], 60.00th=[ 88], 00:25:25.850 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 126], 00:25:25.850 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 163], 99.95th=[ 163], 00:25:25.850 | 99.99th=[ 163] 00:25:25.850 bw 
( KiB/s): min= 552, max= 1152, per=3.65%, avg=768.00, stdev=146.89, samples=20 00:25:25.850 iops : min= 138, max= 288, avg=192.00, stdev=36.72, samples=20 00:25:25.850 lat (msec) : 50=5.18%, 100=69.07%, 250=25.75% 00:25:25.850 cpu : usr=44.05%, sys=0.74%, ctx=1214, majf=0, minf=9 00:25:25.850 IO depths : 1=4.0%, 2=8.8%, 4=21.0%, 8=57.7%, 16=8.4%, 32=0.0%, >=64=0.0% 00:25:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 complete : 0=0.0%, 4=92.8%, 8=1.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.851 filename2: (groupid=0, jobs=1): err= 0: pid=89777: Thu Apr 25 18:20:21 2024 00:25:25.851 read: IOPS=202, BW=810KiB/s (830kB/s)(8112KiB/10011msec) 00:25:25.851 slat (usec): min=6, max=4040, avg=15.07, stdev=102.43 00:25:25.851 clat (msec): min=17, max=156, avg=78.88, stdev=22.00 00:25:25.851 lat (msec): min=17, max=156, avg=78.90, stdev=22.00 00:25:25.851 clat percentiles (msec): 00:25:25.851 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 63], 00:25:25.851 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 82], 00:25:25.851 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 120], 00:25:25.851 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:25:25.851 | 99.99th=[ 157] 00:25:25.851 bw ( KiB/s): min= 512, max= 1024, per=3.80%, avg=800.00, stdev=117.82, samples=19 00:25:25.851 iops : min= 128, max= 256, avg=200.00, stdev=29.45, samples=19 00:25:25.851 lat (msec) : 20=0.30%, 50=4.98%, 100=76.68%, 250=18.05% 00:25:25.851 cpu : usr=39.93%, sys=0.62%, ctx=1322, majf=0, minf=9 00:25:25.851 IO depths : 1=2.6%, 2=6.2%, 4=16.7%, 8=64.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:25:25.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.851 filename2: (groupid=0, jobs=1): err= 0: pid=89778: Thu Apr 25 18:20:21 2024 00:25:25.851 read: IOPS=225, BW=902KiB/s (923kB/s)(9052KiB/10038msec) 00:25:25.851 slat (usec): min=7, max=8028, avg=18.18, stdev=223.91 00:25:25.851 clat (msec): min=25, max=164, avg=70.80, stdev=22.16 00:25:25.851 lat (msec): min=25, max=164, avg=70.82, stdev=22.16 00:25:25.851 clat percentiles (msec): 00:25:25.851 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 52], 00:25:25.851 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:25:25.851 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 108], 00:25:25.851 | 99.00th=[ 125], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:25:25.851 | 99.99th=[ 165] 00:25:25.851 bw ( KiB/s): min= 688, max= 1168, per=4.26%, avg=898.80, stdev=143.75, samples=20 00:25:25.851 iops : min= 172, max= 292, avg=224.70, stdev=35.94, samples=20 00:25:25.851 lat (msec) : 50=19.66%, 100=68.01%, 250=12.33% 00:25:25.851 cpu : usr=33.55%, sys=0.52%, ctx=1183, majf=0, minf=9 00:25:25.851 IO depths : 1=1.3%, 2=2.9%, 4=10.6%, 8=73.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:25.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 issued rwts: total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.851 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:25:25.851 filename2: (groupid=0, jobs=1): err= 0: pid=89779: Thu Apr 25 18:20:21 2024 00:25:25.851 read: IOPS=244, BW=978KiB/s (1001kB/s)(9848KiB/10072msec) 00:25:25.851 slat (usec): min=5, max=8023, avg=21.48, stdev=279.56 00:25:25.851 clat (usec): min=720, max=155799, avg=65241.49, stdev=24392.08 00:25:25.851 lat (usec): min=728, max=155806, avg=65262.98, stdev=24406.27 00:25:25.851 clat percentiles (usec): 00:25:25.851 | 1.00th=[ 1598], 5.00th=[ 5932], 10.00th=[ 39584], 20.00th=[ 47973], 00:25:25.851 | 30.00th=[ 57410], 40.00th=[ 60031], 50.00th=[ 63701], 60.00th=[ 70779], 00:25:25.851 | 70.00th=[ 71828], 80.00th=[ 84411], 90.00th=[ 95945], 95.00th=[106431], 00:25:25.851 | 99.00th=[120062], 99.50th=[128451], 99.90th=[156238], 99.95th=[156238], 00:25:25.851 | 99.99th=[156238] 00:25:25.851 bw ( KiB/s): min= 720, max= 2048, per=4.64%, avg=978.20, stdev=275.04, samples=20 00:25:25.851 iops : min= 180, max= 512, avg=244.50, stdev=68.80, samples=20 00:25:25.851 lat (usec) : 750=0.08% 00:25:25.851 lat (msec) : 2=1.91%, 4=1.91%, 10=1.95%, 50=16.94%, 100=71.12% 00:25:25.851 lat (msec) : 250=6.09% 00:25:25.851 cpu : usr=32.66%, sys=0.69%, ctx=904, majf=0, minf=0 00:25:25.851 IO depths : 1=0.5%, 2=1.4%, 4=8.4%, 8=76.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:25.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.851 filename2: (groupid=0, jobs=1): err= 0: pid=89780: Thu Apr 25 18:20:21 2024 00:25:25.851 read: IOPS=230, BW=920KiB/s (942kB/s)(9216KiB/10014msec) 00:25:25.851 slat (usec): min=3, max=8029, avg=27.70, stdev=343.69 00:25:25.851 clat (msec): min=21, max=152, avg=69.37, stdev=21.86 00:25:25.851 lat (msec): min=21, max=152, avg=69.40, stdev=21.87 00:25:25.851 clat percentiles (msec): 00:25:25.851 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 49], 00:25:25.851 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:25:25.851 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 109], 00:25:25.851 | 99.00th=[ 129], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:25:25.851 | 99.99th=[ 153] 00:25:25.851 bw ( KiB/s): min= 640, max= 1168, per=4.36%, avg=919.20, stdev=152.50, samples=20 00:25:25.851 iops : min= 160, max= 292, avg=229.80, stdev=38.12, samples=20 00:25:25.851 lat (msec) : 50=21.96%, 100=67.88%, 250=10.16% 00:25:25.851 cpu : usr=37.93%, sys=0.71%, ctx=1060, majf=0, minf=9 00:25:25.851 IO depths : 1=0.9%, 2=2.2%, 4=8.6%, 8=75.0%, 16=13.4%, 32=0.0%, >=64=0.0% 00:25:25.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 complete : 0=0.0%, 4=90.0%, 8=6.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.851 filename2: (groupid=0, jobs=1): err= 0: pid=89781: Thu Apr 25 18:20:21 2024 00:25:25.851 read: IOPS=251, BW=1004KiB/s (1028kB/s)(9.83MiB/10022msec) 00:25:25.851 slat (usec): min=3, max=4073, avg=16.55, stdev=139.13 00:25:25.851 clat (msec): min=23, max=144, avg=63.64, stdev=20.88 00:25:25.851 lat (msec): min=23, max=144, avg=63.66, stdev=20.88 00:25:25.851 clat percentiles (msec): 00:25:25.851 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 46], 00:25:25.851 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 62], 
60.00th=[ 66], 00:25:25.851 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 104], 00:25:25.851 | 99.00th=[ 118], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 144], 00:25:25.851 | 99.99th=[ 144] 00:25:25.851 bw ( KiB/s): min= 640, max= 1280, per=4.75%, avg=1000.00, stdev=191.02, samples=20 00:25:25.851 iops : min= 160, max= 320, avg=250.00, stdev=47.75, samples=20 00:25:25.851 lat (msec) : 50=33.55%, 100=60.33%, 250=6.12% 00:25:25.851 cpu : usr=44.87%, sys=0.82%, ctx=1206, majf=0, minf=9 00:25:25.851 IO depths : 1=0.5%, 2=1.3%, 4=6.8%, 8=78.0%, 16=13.5%, 32=0.0%, >=64=0.0% 00:25:25.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 complete : 0=0.0%, 4=89.5%, 8=6.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.851 filename2: (groupid=0, jobs=1): err= 0: pid=89782: Thu Apr 25 18:20:21 2024 00:25:25.851 read: IOPS=238, BW=952KiB/s (975kB/s)(9560KiB/10039msec) 00:25:25.851 slat (usec): min=4, max=8024, avg=17.17, stdev=183.44 00:25:25.851 clat (msec): min=6, max=142, avg=67.02, stdev=22.01 00:25:25.851 lat (msec): min=6, max=142, avg=67.04, stdev=22.02 00:25:25.851 clat percentiles (msec): 00:25:25.851 | 1.00th=[ 14], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 50], 00:25:25.851 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 69], 00:25:25.851 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 109], 00:25:25.851 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:25:25.851 | 99.99th=[ 142] 00:25:25.851 bw ( KiB/s): min= 640, max= 1456, per=4.51%, avg=949.60, stdev=183.99, samples=20 00:25:25.851 iops : min= 160, max= 364, avg=237.40, stdev=46.00, samples=20 00:25:25.851 lat (msec) : 10=0.67%, 20=0.67%, 50=20.04%, 100=70.54%, 250=8.08% 00:25:25.851 cpu : usr=44.72%, sys=0.78%, ctx=1329, majf=0, minf=9 00:25:25.851 IO depths : 1=1.4%, 2=3.3%, 4=11.4%, 8=72.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:25:25.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.851 issued rwts: total=2390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.851 filename2: (groupid=0, jobs=1): err= 0: pid=89783: Thu Apr 25 18:20:21 2024 00:25:25.851 read: IOPS=198, BW=795KiB/s (814kB/s)(7972KiB/10024msec) 00:25:25.851 slat (usec): min=4, max=8053, avg=19.80, stdev=201.37 00:25:25.851 clat (msec): min=28, max=163, avg=80.25, stdev=23.13 00:25:25.851 lat (msec): min=28, max=163, avg=80.27, stdev=23.12 00:25:25.851 clat percentiles (msec): 00:25:25.851 | 1.00th=[ 41], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 63], 00:25:25.851 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 85], 00:25:25.851 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 130], 00:25:25.851 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 165], 00:25:25.851 | 99.99th=[ 165] 00:25:25.851 bw ( KiB/s): min= 600, max= 1152, per=3.77%, avg=793.00, stdev=146.69, samples=20 00:25:25.851 iops : min= 150, max= 288, avg=198.25, stdev=36.67, samples=20 00:25:25.851 lat (msec) : 50=6.32%, 100=73.56%, 250=20.12% 00:25:25.851 cpu : usr=41.55%, sys=0.59%, ctx=1410, majf=0, minf=9 00:25:25.851 IO depths : 1=2.1%, 2=4.4%, 4=11.9%, 8=69.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:25:25.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:25.852 complete : 0=0.0%, 4=90.9%, 8=5.4%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.852 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.852 00:25:25.852 Run status group 0 (all jobs): 00:25:25.852 READ: bw=20.6MiB/s (21.6MB/s), 770KiB/s-1004KiB/s (789kB/s-1028kB/s), io=207MiB (217MB), run=10011-10072msec 00:25:25.852 18:20:21 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:25.852 18:20:21 -- target/dif.sh@43 -- # local sub 00:25:25.852 18:20:21 -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.852 18:20:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:25.852 18:20:21 -- target/dif.sh@36 -- # local sub_id=0 00:25:25.852 18:20:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.852 18:20:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:25.852 18:20:21 -- target/dif.sh@36 -- # local sub_id=1 00:25:25.852 18:20:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.852 18:20:21 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:25.852 18:20:21 -- target/dif.sh@36 -- # local sub_id=2 00:25:25.852 18:20:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:25.852 18:20:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:25.852 18:20:21 -- target/dif.sh@115 -- # numjobs=2 00:25:25.852 18:20:21 -- target/dif.sh@115 -- # iodepth=8 00:25:25.852 18:20:21 -- target/dif.sh@115 -- # runtime=5 00:25:25.852 18:20:21 -- target/dif.sh@115 -- # files=1 00:25:25.852 18:20:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:25.852 18:20:21 -- target/dif.sh@28 -- # local sub 00:25:25.852 18:20:21 -- target/dif.sh@30 -- # for sub in "$@" 00:25:25.852 18:20:21 -- target/dif.sh@31 -- # 
create_subsystem 0 00:25:25.852 18:20:21 -- target/dif.sh@18 -- # local sub_id=0 00:25:25.852 18:20:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 bdev_null0 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:25.852 18:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 [2024-04-25 18:20:22.005923] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.852 18:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:22 -- target/dif.sh@30 -- # for sub in "$@" 00:25:25.852 18:20:22 -- target/dif.sh@31 -- # create_subsystem 1 00:25:25.852 18:20:22 -- target/dif.sh@18 -- # local sub_id=1 00:25:25.852 18:20:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:25.852 18:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 bdev_null1 00:25:25.852 18:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:25.852 18:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:25.852 18:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.852 18:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.852 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:25.852 18:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.852 18:20:22 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:25.852 18:20:22 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:25.852 18:20:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:25.852 18:20:22 -- nvmf/common.sh@520 -- # config=() 00:25:25.852 18:20:22 -- nvmf/common.sh@520 -- # local subsystem config 00:25:25.852 
18:20:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:25.852 18:20:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:25.852 { 00:25:25.852 "params": { 00:25:25.852 "name": "Nvme$subsystem", 00:25:25.852 "trtype": "$TEST_TRANSPORT", 00:25:25.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.852 "adrfam": "ipv4", 00:25:25.852 "trsvcid": "$NVMF_PORT", 00:25:25.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.852 "hdgst": ${hdgst:-false}, 00:25:25.852 "ddgst": ${ddgst:-false} 00:25:25.852 }, 00:25:25.852 "method": "bdev_nvme_attach_controller" 00:25:25.852 } 00:25:25.852 EOF 00:25:25.852 )") 00:25:25.852 18:20:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.852 18:20:22 -- target/dif.sh@82 -- # gen_fio_conf 00:25:25.852 18:20:22 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.852 18:20:22 -- target/dif.sh@54 -- # local file 00:25:25.852 18:20:22 -- target/dif.sh@56 -- # cat 00:25:25.852 18:20:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:25.852 18:20:22 -- nvmf/common.sh@542 -- # cat 00:25:25.852 18:20:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:25.852 18:20:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:25.852 18:20:22 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:25.852 18:20:22 -- common/autotest_common.sh@1320 -- # shift 00:25:25.852 18:20:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:25.852 18:20:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.852 18:20:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:25.852 18:20:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:25.852 18:20:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:25.852 18:20:22 -- target/dif.sh@72 -- # (( file <= files )) 00:25:25.852 18:20:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:25.852 18:20:22 -- target/dif.sh@73 -- # cat 00:25:25.852 18:20:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:25.852 18:20:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:25.852 { 00:25:25.852 "params": { 00:25:25.852 "name": "Nvme$subsystem", 00:25:25.852 "trtype": "$TEST_TRANSPORT", 00:25:25.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.852 "adrfam": "ipv4", 00:25:25.852 "trsvcid": "$NVMF_PORT", 00:25:25.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.852 "hdgst": ${hdgst:-false}, 00:25:25.852 "ddgst": ${ddgst:-false} 00:25:25.852 }, 00:25:25.852 "method": "bdev_nvme_attach_controller" 00:25:25.852 } 00:25:25.852 EOF 00:25:25.852 )") 00:25:25.852 18:20:22 -- target/dif.sh@72 -- # (( file++ )) 00:25:25.852 18:20:22 -- nvmf/common.sh@542 -- # cat 00:25:25.852 18:20:22 -- target/dif.sh@72 -- # (( file <= files )) 00:25:25.852 18:20:22 -- nvmf/common.sh@544 -- # jq . 
00:25:25.852 18:20:22 -- nvmf/common.sh@545 -- # IFS=, 00:25:25.852 18:20:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:25.852 "params": { 00:25:25.852 "name": "Nvme0", 00:25:25.852 "trtype": "tcp", 00:25:25.852 "traddr": "10.0.0.2", 00:25:25.852 "adrfam": "ipv4", 00:25:25.852 "trsvcid": "4420", 00:25:25.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:25.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:25.852 "hdgst": false, 00:25:25.852 "ddgst": false 00:25:25.852 }, 00:25:25.852 "method": "bdev_nvme_attach_controller" 00:25:25.852 },{ 00:25:25.852 "params": { 00:25:25.852 "name": "Nvme1", 00:25:25.852 "trtype": "tcp", 00:25:25.852 "traddr": "10.0.0.2", 00:25:25.852 "adrfam": "ipv4", 00:25:25.852 "trsvcid": "4420", 00:25:25.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.853 "hdgst": false, 00:25:25.853 "ddgst": false 00:25:25.853 }, 00:25:25.853 "method": "bdev_nvme_attach_controller" 00:25:25.853 }' 00:25:25.853 18:20:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:25.853 18:20:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:25.853 18:20:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.853 18:20:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:25.853 18:20:22 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:25.853 18:20:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:25.853 18:20:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:25.853 18:20:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:25.853 18:20:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:25.853 18:20:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.853 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:25.853 ... 00:25:25.853 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:25.853 ... 00:25:25.853 fio-3.35 00:25:25.853 Starting 4 threads 00:25:25.853 [2024-04-25 18:20:22.734246] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:25.853 [2024-04-25 18:20:22.734892] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:30.036 00:25:30.036 filename0: (groupid=0, jobs=1): err= 0: pid=89915: Thu Apr 25 18:20:27 2024 00:25:30.036 read: IOPS=2167, BW=16.9MiB/s (17.8MB/s)(84.7MiB/5001msec) 00:25:30.036 slat (nsec): min=6700, max=98911, avg=16298.01, stdev=6316.54 00:25:30.036 clat (usec): min=2671, max=5347, avg=3610.56, stdev=154.24 00:25:30.036 lat (usec): min=2681, max=5374, avg=3626.86, stdev=154.65 00:25:30.036 clat percentiles (usec): 00:25:30.036 | 1.00th=[ 3359], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3490], 00:25:30.036 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:25:30.036 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3884], 00:25:30.036 | 99.00th=[ 4047], 99.50th=[ 4113], 99.90th=[ 4621], 99.95th=[ 5276], 00:25:30.036 | 99.99th=[ 5342] 00:25:30.036 bw ( KiB/s): min=16896, max=17664, per=24.96%, avg=17354.89, stdev=251.67, samples=9 00:25:30.036 iops : min= 2112, max= 2208, avg=2169.33, stdev=31.50, samples=9 00:25:30.036 lat (msec) : 4=98.34%, 10=1.66% 00:25:30.036 cpu : usr=94.66%, sys=4.22%, ctx=5, majf=0, minf=0 00:25:30.036 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.036 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.036 issued rwts: total=10840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.036 filename0: (groupid=0, jobs=1): err= 0: pid=89916: Thu Apr 25 18:20:27 2024 00:25:30.036 read: IOPS=2190, BW=17.1MiB/s (17.9MB/s)(85.6MiB/5002msec) 00:25:30.036 slat (nsec): min=6062, max=89520, avg=9560.25, stdev=5233.87 00:25:30.036 clat (usec): min=659, max=4311, avg=3604.75, stdev=279.81 00:25:30.036 lat (usec): min=666, max=4319, avg=3614.31, stdev=279.77 00:25:30.036 clat percentiles (usec): 00:25:30.036 | 1.00th=[ 1942], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3523], 00:25:30.036 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:25:30.036 | 70.00th=[ 3687], 80.00th=[ 3752], 90.00th=[ 3818], 95.00th=[ 3916], 00:25:30.036 | 99.00th=[ 4047], 99.50th=[ 4113], 99.90th=[ 4228], 99.95th=[ 4228], 00:25:30.036 | 99.99th=[ 4293] 00:25:30.036 bw ( KiB/s): min=17280, max=17664, per=25.17%, avg=17504.00, stdev=120.80, samples=9 00:25:30.036 iops : min= 2160, max= 2208, avg=2188.00, stdev=15.10, samples=9 00:25:30.036 lat (usec) : 750=0.03%, 1000=0.07% 00:25:30.036 lat (msec) : 2=0.97%, 4=97.24%, 10=1.69% 00:25:30.036 cpu : usr=93.70%, sys=4.86%, ctx=4, majf=0, minf=0 00:25:30.036 IO depths : 1=8.8%, 2=21.7%, 4=53.0%, 8=16.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.036 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.036 issued rwts: total=10959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.036 filename1: (groupid=0, jobs=1): err= 0: pid=89917: Thu Apr 25 18:20:27 2024 00:25:30.036 read: IOPS=2167, BW=16.9MiB/s (17.8MB/s)(84.7MiB/5001msec) 00:25:30.036 slat (usec): min=6, max=103, avg=15.91, stdev= 6.96 00:25:30.036 clat (usec): min=2629, max=6042, avg=3613.98, stdev=159.76 00:25:30.036 lat (usec): min=2658, max=6070, avg=3629.89, stdev=159.92 00:25:30.036 clat percentiles (usec): 
00:25:30.036 | 1.00th=[ 3359], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3490], 00:25:30.036 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:25:30.036 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3884], 00:25:30.036 | 99.00th=[ 4047], 99.50th=[ 4113], 99.90th=[ 4424], 99.95th=[ 5997], 00:25:30.036 | 99.99th=[ 6063] 00:25:30.036 bw ( KiB/s): min=16896, max=17664, per=24.95%, avg=17351.11, stdev=256.89, samples=9 00:25:30.036 iops : min= 2112, max= 2208, avg=2168.89, stdev=32.11, samples=9 00:25:30.036 lat (msec) : 4=98.30%, 10=1.70% 00:25:30.036 cpu : usr=94.64%, sys=4.06%, ctx=26, majf=0, minf=0 00:25:30.036 IO depths : 1=12.2%, 2=24.9%, 4=50.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.036 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.036 issued rwts: total=10840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.036 filename1: (groupid=0, jobs=1): err= 0: pid=89918: Thu Apr 25 18:20:27 2024 00:25:30.036 read: IOPS=2167, BW=16.9MiB/s (17.8MB/s)(84.7MiB/5001msec) 00:25:30.036 slat (nsec): min=3291, max=71529, avg=15507.53, stdev=6405.62 00:25:30.036 clat (usec): min=1287, max=7233, avg=3624.78, stdev=238.62 00:25:30.036 lat (usec): min=1294, max=7264, avg=3640.28, stdev=238.99 00:25:30.036 clat percentiles (usec): 00:25:30.036 | 1.00th=[ 3032], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3523], 00:25:30.036 | 30.00th=[ 3556], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:25:30.036 | 70.00th=[ 3687], 80.00th=[ 3752], 90.00th=[ 3851], 95.00th=[ 3949], 00:25:30.036 | 99.00th=[ 4293], 99.50th=[ 4555], 99.90th=[ 5932], 99.95th=[ 6849], 00:25:30.036 | 99.99th=[ 7242] 00:25:30.036 bw ( KiB/s): min=16864, max=17664, per=24.95%, avg=17347.56, stdev=280.20, samples=9 00:25:30.037 iops : min= 2108, max= 2208, avg=2168.44, stdev=35.03, samples=9 00:25:30.037 lat (msec) : 2=0.13%, 4=95.95%, 10=3.92% 00:25:30.037 cpu : usr=95.06%, sys=3.60%, ctx=13, majf=0, minf=0 00:25:30.037 IO depths : 1=7.8%, 2=16.2%, 4=58.7%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.037 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.037 issued rwts: total=10838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.037 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.037 00:25:30.037 Run status group 0 (all jobs): 00:25:30.037 READ: bw=67.9MiB/s (71.2MB/s), 16.9MiB/s-17.1MiB/s (17.8MB/s-17.9MB/s), io=340MiB (356MB), run=5001-5002msec 00:25:30.295 18:20:28 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:30.295 18:20:28 -- target/dif.sh@43 -- # local sub 00:25:30.295 18:20:28 -- target/dif.sh@45 -- # for sub in "$@" 00:25:30.295 18:20:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:30.295 18:20:28 -- target/dif.sh@36 -- # local sub_id=0 00:25:30.295 18:20:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 18:20:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 18:20:28 -- target/dif.sh@45 -- # for sub in "$@" 00:25:30.295 18:20:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:30.295 18:20:28 -- target/dif.sh@36 -- # local sub_id=1 00:25:30.295 18:20:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 18:20:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 ************************************ 00:25:30.295 END TEST fio_dif_rand_params 00:25:30.295 ************************************ 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 00:25:30.295 real 0m23.698s 00:25:30.295 user 2m7.671s 00:25:30.295 sys 0m4.177s 00:25:30.295 18:20:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 18:20:28 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:30.295 18:20:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:30.295 18:20:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 ************************************ 00:25:30.295 START TEST fio_dif_digest 00:25:30.295 ************************************ 00:25:30.295 18:20:28 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:25:30.295 18:20:28 -- target/dif.sh@123 -- # local NULL_DIF 00:25:30.295 18:20:28 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:30.295 18:20:28 -- target/dif.sh@125 -- # local hdgst ddgst 00:25:30.295 18:20:28 -- target/dif.sh@127 -- # NULL_DIF=3 00:25:30.295 18:20:28 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:30.295 18:20:28 -- target/dif.sh@127 -- # numjobs=3 00:25:30.295 18:20:28 -- target/dif.sh@127 -- # iodepth=3 00:25:30.295 18:20:28 -- target/dif.sh@127 -- # runtime=10 00:25:30.295 18:20:28 -- target/dif.sh@128 -- # hdgst=true 00:25:30.295 18:20:28 -- target/dif.sh@128 -- # ddgst=true 00:25:30.295 18:20:28 -- target/dif.sh@130 -- # create_subsystems 0 00:25:30.295 18:20:28 -- target/dif.sh@28 -- # local sub 00:25:30.295 18:20:28 -- target/dif.sh@30 -- # for sub in "$@" 00:25:30.295 18:20:28 -- target/dif.sh@31 -- # create_subsystem 0 00:25:30.295 18:20:28 -- target/dif.sh@18 -- # local sub_id=0 00:25:30.295 18:20:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 bdev_null0 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 18:20:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 18:20:28 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 18:20:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:30.295 18:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.295 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.295 [2024-04-25 18:20:28.203200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.295 18:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.295 18:20:28 -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:30.295 18:20:28 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:30.295 18:20:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:30.295 18:20:28 -- nvmf/common.sh@520 -- # config=() 00:25:30.295 18:20:28 -- nvmf/common.sh@520 -- # local subsystem config 00:25:30.295 18:20:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.295 18:20:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.295 18:20:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.295 { 00:25:30.295 "params": { 00:25:30.295 "name": "Nvme$subsystem", 00:25:30.295 "trtype": "$TEST_TRANSPORT", 00:25:30.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.295 "adrfam": "ipv4", 00:25:30.295 "trsvcid": "$NVMF_PORT", 00:25:30.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.295 "hdgst": ${hdgst:-false}, 00:25:30.295 "ddgst": ${ddgst:-false} 00:25:30.295 }, 00:25:30.295 "method": "bdev_nvme_attach_controller" 00:25:30.296 } 00:25:30.296 EOF 00:25:30.296 )") 00:25:30.296 18:20:28 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.296 18:20:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:30.296 18:20:28 -- target/dif.sh@82 -- # gen_fio_conf 00:25:30.296 18:20:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:30.296 18:20:28 -- target/dif.sh@54 -- # local file 00:25:30.296 18:20:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:30.296 18:20:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.296 18:20:28 -- target/dif.sh@56 -- # cat 00:25:30.296 18:20:28 -- common/autotest_common.sh@1320 -- # shift 00:25:30.296 18:20:28 -- nvmf/common.sh@542 -- # cat 00:25:30.296 18:20:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:30.296 18:20:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.296 18:20:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.296 18:20:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:30.296 18:20:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:30.296 18:20:28 -- nvmf/common.sh@544 -- # jq . 
00:25:30.296 18:20:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:30.296 18:20:28 -- target/dif.sh@72 -- # (( file <= files )) 00:25:30.296 18:20:28 -- nvmf/common.sh@545 -- # IFS=, 00:25:30.296 18:20:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:30.296 "params": { 00:25:30.296 "name": "Nvme0", 00:25:30.296 "trtype": "tcp", 00:25:30.296 "traddr": "10.0.0.2", 00:25:30.296 "adrfam": "ipv4", 00:25:30.296 "trsvcid": "4420", 00:25:30.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:30.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:30.296 "hdgst": true, 00:25:30.296 "ddgst": true 00:25:30.296 }, 00:25:30.296 "method": "bdev_nvme_attach_controller" 00:25:30.296 }' 00:25:30.554 18:20:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:30.554 18:20:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:30.554 18:20:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.554 18:20:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:30.554 18:20:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:30.554 18:20:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:30.554 18:20:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:30.554 18:20:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:30.554 18:20:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:30.554 18:20:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.554 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:30.554 ... 00:25:30.554 fio-3.35 00:25:30.554 Starting 3 threads 00:25:31.119 [2024-04-25 18:20:28.765005] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:31.119 [2024-04-25 18:20:28.765089] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:41.090 00:25:41.090 filename0: (groupid=0, jobs=1): err= 0: pid=90024: Thu Apr 25 18:20:38 2024 00:25:41.090 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(227MiB/10005msec) 00:25:41.090 slat (nsec): min=6796, max=57902, avg=12362.82, stdev=4863.98 00:25:41.090 clat (usec): min=9114, max=20727, avg=16536.62, stdev=1367.66 00:25:41.090 lat (usec): min=9127, max=20738, avg=16548.99, stdev=1368.64 00:25:41.090 clat percentiles (usec): 00:25:41.090 | 1.00th=[14091], 5.00th=[14484], 10.00th=[14877], 20.00th=[15270], 00:25:41.090 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16450], 60.00th=[16909], 00:25:41.090 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18482], 95.00th=[19006], 00:25:41.090 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20579], 99.95th=[20841], 00:25:41.090 | 99.99th=[20841] 00:25:41.090 bw ( KiB/s): min=20992, max=25394, per=27.21%, avg=23170.50, stdev=1595.87, samples=20 00:25:41.090 iops : min= 164, max= 198, avg=181.00, stdev=12.44, samples=20 00:25:41.090 lat (msec) : 10=0.06%, 20=99.72%, 50=0.22% 00:25:41.090 cpu : usr=93.27%, sys=5.46%, ctx=94, majf=0, minf=9 00:25:41.090 IO depths : 1=7.4%, 2=92.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:41.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.090 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.090 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:41.090 filename0: (groupid=0, jobs=1): err= 0: pid=90025: Thu Apr 25 18:20:38 2024 00:25:41.090 read: IOPS=259, BW=32.5MiB/s (34.1MB/s)(325MiB/10007msec) 00:25:41.090 slat (nsec): min=6843, max=64337, avg=12230.75, stdev=4116.16 00:25:41.090 clat (usec): min=7170, max=14716, avg=11529.33, stdev=1091.46 00:25:41.090 lat (usec): min=7181, max=14728, avg=11541.56, stdev=1092.18 00:25:41.090 clat percentiles (usec): 00:25:41.090 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:25:41.090 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:25:41.090 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13042], 95.00th=[13304], 00:25:41.090 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14353], 99.95th=[14615], 00:25:41.090 | 99.99th=[14746] 00:25:41.090 bw ( KiB/s): min=29952, max=36352, per=39.05%, avg=33254.40, stdev=2313.56, samples=20 00:25:41.090 iops : min= 234, max= 284, avg=259.80, stdev=18.07, samples=20 00:25:41.090 lat (msec) : 10=6.77%, 20=93.23% 00:25:41.091 cpu : usr=91.85%, sys=6.67%, ctx=21, majf=0, minf=0 00:25:41.091 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:41.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.091 issued rwts: total=2600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.091 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:41.091 filename0: (groupid=0, jobs=1): err= 0: pid=90026: Thu Apr 25 18:20:38 2024 00:25:41.091 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(281MiB/10005msec) 00:25:41.091 slat (nsec): min=6861, max=62731, avg=11884.38, stdev=4573.72 00:25:41.091 clat (usec): min=6019, max=18100, avg=13357.01, stdev=1358.48 00:25:41.091 lat (usec): min=6029, max=18113, avg=13368.89, stdev=1358.89 00:25:41.091 clat percentiles (usec): 00:25:41.091 | 1.00th=[10552], 
5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:25:41.091 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13304], 60.00th=[13829], 00:25:41.091 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15533], 00:25:41.091 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:25:41.091 | 99.99th=[18220] 00:25:41.091 bw ( KiB/s): min=25344, max=32000, per=33.69%, avg=28684.80, stdev=2195.10, samples=20 00:25:41.091 iops : min= 198, max= 250, avg=224.10, stdev=17.15, samples=20 00:25:41.091 lat (msec) : 10=0.09%, 20=99.91% 00:25:41.091 cpu : usr=93.24%, sys=5.41%, ctx=6, majf=0, minf=0 00:25:41.091 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:41.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.091 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.091 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:41.091 00:25:41.091 Run status group 0 (all jobs): 00:25:41.091 READ: bw=83.2MiB/s (87.2MB/s), 22.7MiB/s-32.5MiB/s (23.8MB/s-34.1MB/s), io=832MiB (873MB), run=10005-10007msec 00:25:41.349 18:20:39 -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:41.349 18:20:39 -- target/dif.sh@43 -- # local sub 00:25:41.349 18:20:39 -- target/dif.sh@45 -- # for sub in "$@" 00:25:41.349 18:20:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:41.349 18:20:39 -- target/dif.sh@36 -- # local sub_id=0 00:25:41.350 18:20:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.350 18:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.350 18:20:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.350 18:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.350 18:20:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:41.350 18:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.350 18:20:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.350 ************************************ 00:25:41.350 END TEST fio_dif_digest 00:25:41.350 ************************************ 00:25:41.350 18:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.350 00:25:41.350 real 0m11.012s 00:25:41.350 user 0m28.545s 00:25:41.350 sys 0m1.996s 00:25:41.350 18:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.350 18:20:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.350 18:20:39 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:41.350 18:20:39 -- target/dif.sh@147 -- # nvmftestfini 00:25:41.350 18:20:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:41.350 18:20:39 -- nvmf/common.sh@116 -- # sync 00:25:41.350 18:20:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:41.350 18:20:39 -- nvmf/common.sh@119 -- # set +e 00:25:41.350 18:20:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:41.350 18:20:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:41.350 rmmod nvme_tcp 00:25:41.608 rmmod nvme_fabrics 00:25:41.608 rmmod nvme_keyring 00:25:41.608 18:20:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:41.608 18:20:39 -- nvmf/common.sh@123 -- # set -e 00:25:41.608 18:20:39 -- nvmf/common.sh@124 -- # return 0 00:25:41.608 18:20:39 -- nvmf/common.sh@477 -- # '[' -n 89257 ']' 00:25:41.608 18:20:39 -- nvmf/common.sh@478 -- # killprocess 89257 00:25:41.608 18:20:39 -- common/autotest_common.sh@926 -- # '[' -z 89257 ']' 00:25:41.608 18:20:39 -- 
common/autotest_common.sh@930 -- # kill -0 89257 00:25:41.608 18:20:39 -- common/autotest_common.sh@931 -- # uname 00:25:41.608 18:20:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:41.608 18:20:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89257 00:25:41.608 killing process with pid 89257 00:25:41.608 18:20:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:41.608 18:20:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:41.608 18:20:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89257' 00:25:41.608 18:20:39 -- common/autotest_common.sh@945 -- # kill 89257 00:25:41.608 18:20:39 -- common/autotest_common.sh@950 -- # wait 89257 00:25:41.867 18:20:39 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:25:41.867 18:20:39 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:42.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:42.125 Waiting for block devices as requested 00:25:42.125 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:25:42.125 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:25:42.384 18:20:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:42.384 18:20:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:42.384 18:20:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.384 18:20:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:42.384 18:20:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.384 18:20:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:42.384 18:20:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.384 18:20:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:42.384 00:25:42.384 real 1m0.085s 00:25:42.384 user 3m53.536s 00:25:42.384 sys 0m13.619s 00:25:42.384 18:20:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.384 18:20:40 -- common/autotest_common.sh@10 -- # set +x 00:25:42.384 ************************************ 00:25:42.384 END TEST nvmf_dif 00:25:42.384 ************************************ 00:25:42.384 18:20:40 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:42.384 18:20:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.384 18:20:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.384 18:20:40 -- common/autotest_common.sh@10 -- # set +x 00:25:42.384 ************************************ 00:25:42.384 START TEST nvmf_abort_qd_sizes 00:25:42.384 ************************************ 00:25:42.384 18:20:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:42.384 * Looking for test storage... 
00:25:42.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:42.384 18:20:40 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:42.384 18:20:40 -- nvmf/common.sh@7 -- # uname -s 00:25:42.384 18:20:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.384 18:20:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.384 18:20:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.384 18:20:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.384 18:20:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.384 18:20:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.384 18:20:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.384 18:20:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.384 18:20:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.384 18:20:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.384 18:20:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:25:42.384 18:20:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 00:25:42.384 18:20:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.384 18:20:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.384 18:20:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:42.384 18:20:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:42.384 18:20:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.384 18:20:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.384 18:20:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.384 18:20:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.384 18:20:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.384 18:20:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.384 18:20:40 -- paths/export.sh@5 -- # export PATH 00:25:42.384 18:20:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.384 18:20:40 -- nvmf/common.sh@46 -- # : 0 00:25:42.385 18:20:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:42.385 18:20:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:42.385 18:20:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:42.385 18:20:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.385 18:20:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.385 18:20:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:42.385 18:20:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:42.385 18:20:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:42.385 18:20:40 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:25:42.385 18:20:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:42.385 18:20:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.385 18:20:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:42.385 18:20:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:42.385 18:20:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:42.385 18:20:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.385 18:20:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:42.385 18:20:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.385 18:20:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:42.385 18:20:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:42.385 18:20:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:42.385 18:20:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:42.385 18:20:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:42.385 18:20:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:42.385 18:20:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.385 18:20:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.385 18:20:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:42.385 18:20:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:42.385 18:20:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:42.385 18:20:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:42.385 18:20:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:42.385 18:20:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.385 18:20:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:42.385 18:20:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:42.385 18:20:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:42.385 18:20:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:42.385 18:20:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:42.643 18:20:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:42.643 Cannot find device "nvmf_tgt_br" 00:25:42.643 18:20:40 -- nvmf/common.sh@154 -- # true 00:25:42.643 18:20:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:42.643 Cannot find device "nvmf_tgt_br2" 00:25:42.643 18:20:40 -- nvmf/common.sh@155 -- # true 
00:25:42.643 18:20:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:42.643 18:20:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:42.643 Cannot find device "nvmf_tgt_br" 00:25:42.643 18:20:40 -- nvmf/common.sh@157 -- # true 00:25:42.643 18:20:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:42.643 Cannot find device "nvmf_tgt_br2" 00:25:42.643 18:20:40 -- nvmf/common.sh@158 -- # true 00:25:42.643 18:20:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:42.644 18:20:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:42.644 18:20:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:42.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:42.644 18:20:40 -- nvmf/common.sh@161 -- # true 00:25:42.644 18:20:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:42.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:42.644 18:20:40 -- nvmf/common.sh@162 -- # true 00:25:42.644 18:20:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:42.644 18:20:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:42.644 18:20:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:42.644 18:20:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:42.644 18:20:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:42.644 18:20:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:42.644 18:20:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:42.644 18:20:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:42.644 18:20:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:42.644 18:20:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:42.644 18:20:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:42.644 18:20:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:42.644 18:20:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:42.644 18:20:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:42.644 18:20:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:42.644 18:20:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:42.644 18:20:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:42.644 18:20:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:42.644 18:20:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:42.902 18:20:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:42.902 18:20:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:42.902 18:20:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:42.902 18:20:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:42.902 18:20:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:42.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:42.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:25:42.902 00:25:42.902 --- 10.0.0.2 ping statistics --- 00:25:42.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.902 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:25:42.902 18:20:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:42.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:42.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:25:42.902 00:25:42.902 --- 10.0.0.3 ping statistics --- 00:25:42.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.902 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:42.902 18:20:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:42.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:42.902 00:25:42.902 --- 10.0.0.1 ping statistics --- 00:25:42.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.902 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:42.902 18:20:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.902 18:20:40 -- nvmf/common.sh@421 -- # return 0 00:25:42.902 18:20:40 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:42.902 18:20:40 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:43.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:43.470 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:25:43.729 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:25:43.729 18:20:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.729 18:20:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:43.729 18:20:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:43.729 18:20:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.729 18:20:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:43.729 18:20:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:43.729 18:20:41 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:25:43.729 18:20:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:43.729 18:20:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:43.729 18:20:41 -- common/autotest_common.sh@10 -- # set +x 00:25:43.729 18:20:41 -- nvmf/common.sh@469 -- # nvmfpid=90623 00:25:43.729 18:20:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:43.729 18:20:41 -- nvmf/common.sh@470 -- # waitforlisten 90623 00:25:43.729 18:20:41 -- common/autotest_common.sh@819 -- # '[' -z 90623 ']' 00:25:43.729 18:20:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.729 18:20:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:43.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.729 18:20:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.729 18:20:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:43.729 18:20:41 -- common/autotest_common.sh@10 -- # set +x 00:25:43.729 [2024-04-25 18:20:41.589140] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:43.729 [2024-04-25 18:20:41.589235] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.988 [2024-04-25 18:20:41.731544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.988 [2024-04-25 18:20:41.836773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:43.988 [2024-04-25 18:20:41.837197] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.988 [2024-04-25 18:20:41.837403] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.988 [2024-04-25 18:20:41.837586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.988 [2024-04-25 18:20:41.837836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.988 [2024-04-25 18:20:41.837936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.988 [2024-04-25 18:20:41.838175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.988 [2024-04-25 18:20:41.838185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.925 18:20:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:44.925 18:20:42 -- common/autotest_common.sh@852 -- # return 0 00:25:44.925 18:20:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:44.925 18:20:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:44.925 18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.925 18:20:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:25:44.925 18:20:42 -- scripts/common.sh@311 -- # local bdf bdfs 00:25:44.925 18:20:42 -- scripts/common.sh@312 -- # local nvmes 00:25:44.925 18:20:42 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:25:44.925 18:20:42 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:44.925 18:20:42 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:25:44.925 18:20:42 -- scripts/common.sh@297 -- # local bdf= 00:25:44.925 18:20:42 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:25:44.925 18:20:42 -- scripts/common.sh@232 -- # local class 00:25:44.925 18:20:42 -- scripts/common.sh@233 -- # local subclass 00:25:44.925 18:20:42 -- scripts/common.sh@234 -- # local progif 00:25:44.925 18:20:42 -- scripts/common.sh@235 -- # printf %02x 1 00:25:44.925 18:20:42 -- scripts/common.sh@235 -- # class=01 00:25:44.925 18:20:42 -- scripts/common.sh@236 -- # printf %02x 8 00:25:44.925 18:20:42 -- scripts/common.sh@236 -- # subclass=08 00:25:44.925 18:20:42 -- scripts/common.sh@237 -- # printf %02x 2 00:25:44.925 18:20:42 -- scripts/common.sh@237 -- # progif=02 00:25:44.925 18:20:42 -- scripts/common.sh@239 -- # hash lspci 00:25:44.925 18:20:42 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:25:44.925 18:20:42 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:25:44.925 18:20:42 -- scripts/common.sh@242 -- # grep -i -- -p02 00:25:44.925 18:20:42 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:44.925 18:20:42 -- scripts/common.sh@244 -- # tr -d '"' 00:25:44.925 18:20:42 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:44.925 18:20:42 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:25:44.925 18:20:42 -- scripts/common.sh@15 -- # local i 00:25:44.925 18:20:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:25:44.925 18:20:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:44.925 18:20:42 -- scripts/common.sh@24 -- # return 0 00:25:44.925 18:20:42 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:25:44.925 18:20:42 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:44.925 18:20:42 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:25:44.925 18:20:42 -- scripts/common.sh@15 -- # local i 00:25:44.925 18:20:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:25:44.925 18:20:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:44.925 18:20:42 -- scripts/common.sh@24 -- # return 0 00:25:44.925 18:20:42 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:25:44.925 18:20:42 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:44.925 18:20:42 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:25:44.925 18:20:42 -- scripts/common.sh@322 -- # uname -s 00:25:44.925 18:20:42 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:44.925 18:20:42 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:44.925 18:20:42 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:44.925 18:20:42 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:25:44.925 18:20:42 -- scripts/common.sh@322 -- # uname -s 00:25:44.925 18:20:42 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:44.925 18:20:42 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:44.925 18:20:42 -- scripts/common.sh@327 -- # (( 2 )) 00:25:44.925 18:20:42 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:25:44.925 18:20:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:44.925 18:20:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:44.925 18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.925 ************************************ 00:25:44.925 START TEST spdk_target_abort 00:25:44.925 ************************************ 00:25:44.925 18:20:42 -- common/autotest_common.sh@1104 -- # spdk_target 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:25:44.925 18:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.925 18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.925 spdk_targetn1 00:25:44.925 18:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:44.925 18:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.925 18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.925 [2024-04-25 
18:20:42.741634] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.925 18:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:25:44.925 18:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.925 18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.925 18:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:25:44.925 18:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.925 18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.925 18:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:25:44.925 18:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.925 18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:44.925 [2024-04-25 18:20:42.773832] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.925 18:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:44.925 18:20:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:25:48.243 Initializing NVMe Controllers 00:25:48.243 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:25:48.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:25:48.243 Initialization complete. Launching workers. 00:25:48.244 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9636, failed: 0 00:25:48.244 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1185, failed to submit 8451 00:25:48.244 success 749, unsuccess 436, failed 0 00:25:48.244 18:20:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:48.244 18:20:46 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:25:51.528 [2024-04-25 18:20:49.237325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06c20 is same with the state(5) to be set 00:25:51.528 [2024-04-25 18:20:49.237391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06c20 is same with the state(5) to be set 00:25:51.528 [2024-04-25 18:20:49.237403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06c20 is same with the state(5) to be set 00:25:51.528 [2024-04-25 18:20:49.237412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06c20 is same with the state(5) to be set 00:25:51.528 [2024-04-25 18:20:49.237420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06c20 is same with the state(5) to be set 00:25:51.528 Initializing NVMe Controllers 00:25:51.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:25:51.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:25:51.528 Initialization complete. Launching workers. 00:25:51.528 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 6054, failed: 0 00:25:51.528 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1248, failed to submit 4806 00:25:51.528 success 297, unsuccess 951, failed 0 00:25:51.528 18:20:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:51.528 18:20:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:25:54.814 Initializing NVMe Controllers 00:25:54.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:25:54.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:25:54.815 Initialization complete. Launching workers. 
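[Annotation] The spdk_target_abort phase traced above stands up an NVMe-oF/TCP subsystem through SPDK's JSON-RPC interface and then drives it with the bundled abort example at each queue depth under test. A minimal sketch of that sequence, assuming rpc_cmd is the test framework's wrapper around SPDK's scripts/rpc.py and that an SPDK target with the tcp transport is already running (all values as logged above):

    # create the subsystem, attach the bdev-backed namespace, open a TCP listener
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

    # exercise the target with the abort example at the queue depths listed in qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done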
00:25:54.815 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30868, failed: 0 00:25:54.815 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2555, failed to submit 28313 00:25:54.815 success 476, unsuccess 2079, failed 0 00:25:54.815 18:20:52 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:25:54.815 18:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.815 18:20:52 -- common/autotest_common.sh@10 -- # set +x 00:25:54.815 18:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.815 18:20:52 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:54.815 18:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.815 18:20:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.381 18:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.381 18:20:53 -- target/abort_qd_sizes.sh@62 -- # killprocess 90623 00:25:55.381 18:20:53 -- common/autotest_common.sh@926 -- # '[' -z 90623 ']' 00:25:55.381 18:20:53 -- common/autotest_common.sh@930 -- # kill -0 90623 00:25:55.381 18:20:53 -- common/autotest_common.sh@931 -- # uname 00:25:55.381 18:20:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:55.381 18:20:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90623 00:25:55.381 18:20:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:55.381 18:20:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:55.381 18:20:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90623' 00:25:55.381 killing process with pid 90623 00:25:55.381 18:20:53 -- common/autotest_common.sh@945 -- # kill 90623 00:25:55.381 18:20:53 -- common/autotest_common.sh@950 -- # wait 90623 00:25:55.381 00:25:55.381 real 0m10.638s 00:25:55.381 user 0m43.259s 00:25:55.381 sys 0m1.836s 00:25:55.381 18:20:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.381 18:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:55.381 ************************************ 00:25:55.381 END TEST spdk_target_abort 00:25:55.381 ************************************ 00:25:55.639 18:20:53 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:25:55.639 18:20:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:55.639 18:20:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:55.639 18:20:53 -- common/autotest_common.sh@10 -- # set +x 00:25:55.639 ************************************ 00:25:55.639 START TEST kernel_target_abort 00:25:55.639 ************************************ 00:25:55.639 18:20:53 -- common/autotest_common.sh@1104 -- # kernel_target 00:25:55.639 18:20:53 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:25:55.639 18:20:53 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:25:55.639 18:20:53 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:25:55.639 18:20:53 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:25:55.639 18:20:53 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:25:55.639 18:20:53 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:25:55.639 18:20:53 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:55.639 18:20:53 -- nvmf/common.sh@627 -- # local block nvme 00:25:55.639 18:20:53 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:25:55.639 18:20:53 -- nvmf/common.sh@630 -- # modprobe nvmet 00:25:55.639 18:20:53 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:55.639 18:20:53 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:55.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:55.895 Waiting for block devices as requested 00:25:55.895 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:25:55.895 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:25:56.151 18:20:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:25:56.151 18:20:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:56.151 18:20:53 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:25:56.151 18:20:53 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:25:56.151 18:20:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:56.151 No valid GPT data, bailing 00:25:56.151 18:20:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:56.151 18:20:53 -- scripts/common.sh@393 -- # pt= 00:25:56.151 18:20:53 -- scripts/common.sh@394 -- # return 1 00:25:56.151 18:20:53 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:25:56.151 18:20:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:25:56.151 18:20:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:56.151 18:20:53 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:25:56.151 18:20:53 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:25:56.151 18:20:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:56.151 No valid GPT data, bailing 00:25:56.151 18:20:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:56.151 18:20:53 -- scripts/common.sh@393 -- # pt= 00:25:56.151 18:20:53 -- scripts/common.sh@394 -- # return 1 00:25:56.151 18:20:53 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:25:56.151 18:20:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:25:56.151 18:20:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:25:56.151 18:20:53 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:25:56.151 18:20:53 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:25:56.152 18:20:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:25:56.152 No valid GPT data, bailing 00:25:56.152 18:20:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:25:56.152 18:20:54 -- scripts/common.sh@393 -- # pt= 00:25:56.152 18:20:54 -- scripts/common.sh@394 -- # return 1 00:25:56.152 18:20:54 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:25:56.152 18:20:54 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:25:56.152 18:20:54 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:25:56.152 18:20:54 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:25:56.152 18:20:54 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:25:56.152 18:20:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:25:56.152 No valid GPT data, bailing 00:25:56.152 18:20:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:25:56.152 18:20:54 -- scripts/common.sh@393 -- # pt= 00:25:56.152 18:20:54 -- scripts/common.sh@394 -- # return 1 00:25:56.152 18:20:54 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:25:56.152 18:20:54 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:25:56.152 18:20:54 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:25:56.152 18:20:54 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:25:56.152 18:20:54 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:56.152 18:20:54 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:25:56.152 18:20:54 -- nvmf/common.sh@654 -- # echo 1 00:25:56.152 18:20:54 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:25:56.152 18:20:54 -- nvmf/common.sh@656 -- # echo 1 00:25:56.152 18:20:54 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:25:56.152 18:20:54 -- nvmf/common.sh@663 -- # echo tcp 00:25:56.152 18:20:54 -- nvmf/common.sh@664 -- # echo 4420 00:25:56.152 18:20:54 -- nvmf/common.sh@665 -- # echo ipv4 00:25:56.152 18:20:54 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:56.409 18:20:54 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b1b6de6e-7366-4f17-9e9b-43a9b7888b11 --hostid=b1b6de6e-7366-4f17-9e9b-43a9b7888b11 -a 10.0.0.1 -t tcp -s 4420 00:25:56.409 00:25:56.409 Discovery Log Number of Records 2, Generation counter 2 00:25:56.409 =====Discovery Log Entry 0====== 00:25:56.409 trtype: tcp 00:25:56.409 adrfam: ipv4 00:25:56.409 subtype: current discovery subsystem 00:25:56.410 treq: not specified, sq flow control disable supported 00:25:56.410 portid: 1 00:25:56.410 trsvcid: 4420 00:25:56.410 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:56.410 traddr: 10.0.0.1 00:25:56.410 eflags: none 00:25:56.410 sectype: none 00:25:56.410 =====Discovery Log Entry 1====== 00:25:56.410 trtype: tcp 00:25:56.410 adrfam: ipv4 00:25:56.410 subtype: nvme subsystem 00:25:56.410 treq: not specified, sq flow control disable supported 00:25:56.410 portid: 1 00:25:56.410 trsvcid: 4420 00:25:56.410 subnqn: kernel_target 00:25:56.410 traddr: 10.0.0.1 00:25:56.410 eflags: none 00:25:56.410 sectype: none 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
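[Annotation] Before the kernel_target_abort test above, configure_kernel_target picks the first NVMe namespace with no partition table (the "No valid GPT data, bailing" probes) and exports it through the in-kernel nvmet target. The log records only the values being echoed, not the configfs files they land in, so the attribute paths below use the standard nvmet configfs names and should be read as an assumption, not a transcript:

    # a namespace is considered free when blkid reports no partition-table type for it
    pt=$(blkid -s PTTYPE -o value /dev/nvme1n3 || true)
    [ -z "$pt" ] || { echo "/dev/nvme1n3 is in use"; exit 1; }

    modprobe nvmet        # in-kernel NVMe-oF target
    modprobe nvmet-tcp    # TCP transport; both are unloaded later with 'modprobe -r nvmet_tcp nvmet'
    cd /sys/kernel/config/nvmet

    mkdir subsystems/kernel_target
    echo SPDK-kernel_target > subsystems/kernel_target/attr_serial         # assumed target file
    echo 1                  > subsystems/kernel_target/attr_allow_any_host # assumed target file
    mkdir subsystems/kernel_target/namespaces/1
    echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
    echo 1            > subsystems/kernel_target/namespaces/1/enable

    mkdir ports/1
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp      > ports/1/addr_trtype
    echo 4420     > ports/1/addr_trsvcid
    echo ipv4     > ports/1/addr_adrfam

    # publish the subsystem on the port; 'nvme discover -a 10.0.0.1 -t tcp -s 4420' then lists it
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/kernel_target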
00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:56.410 18:20:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:25:59.697 Initializing NVMe Controllers 00:25:59.697 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:25:59.697 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:25:59.697 Initialization complete. Launching workers. 00:25:59.697 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 32767, failed: 0 00:25:59.697 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 32767, failed to submit 0 00:25:59.697 success 0, unsuccess 32767, failed 0 00:25:59.697 18:20:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:59.697 18:20:57 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:02.986 Initializing NVMe Controllers 00:26:02.986 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:02.986 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:02.986 Initialization complete. Launching workers. 00:26:02.986 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 67211, failed: 0 00:26:02.986 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26818, failed to submit 40393 00:26:02.986 success 0, unsuccess 26818, failed 0 00:26:02.986 18:21:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:02.986 18:21:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:06.271 Initializing NVMe Controllers 00:26:06.271 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:06.271 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:06.271 Initialization complete. Launching workers. 
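[Annotation] The abort example's per-run summaries are internally consistent and can be sanity-checked by hand: "success" plus "unsuccess" equals the number of aborts submitted, and aborts submitted plus "failed to submit" equals the I/Os completed. For the first spdk_target run above (-q 4):

    749 success + 436 unsuccess            = 1185 aborts submitted
    1185 submitted + 8451 failed to submit = 9636 I/Os completed

The same identities hold for every run in this log; note that against the in-kernel target all submitted aborts come back as "unsuccess" (success 0), while the SPDK target completes a portion of them.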
00:26:06.271 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 71718, failed: 0 00:26:06.271 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17874, failed to submit 53844 00:26:06.271 success 0, unsuccess 17874, failed 0 00:26:06.271 18:21:03 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:06.271 18:21:03 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:06.271 18:21:03 -- nvmf/common.sh@677 -- # echo 0 00:26:06.271 18:21:03 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:06.271 18:21:03 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:06.271 18:21:03 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:06.271 18:21:03 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:06.271 18:21:03 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:06.271 18:21:03 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:06.271 00:26:06.271 real 0m10.332s 00:26:06.271 user 0m5.236s 00:26:06.271 sys 0m2.383s 00:26:06.271 18:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.271 18:21:03 -- common/autotest_common.sh@10 -- # set +x 00:26:06.271 ************************************ 00:26:06.271 END TEST kernel_target_abort 00:26:06.271 ************************************ 00:26:06.271 18:21:03 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:06.271 18:21:03 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:06.271 18:21:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:06.271 18:21:03 -- nvmf/common.sh@116 -- # sync 00:26:06.271 18:21:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:06.271 18:21:03 -- nvmf/common.sh@119 -- # set +e 00:26:06.271 18:21:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:06.271 18:21:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:06.271 rmmod nvme_tcp 00:26:06.271 rmmod nvme_fabrics 00:26:06.271 rmmod nvme_keyring 00:26:06.271 18:21:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:06.271 18:21:03 -- nvmf/common.sh@123 -- # set -e 00:26:06.271 18:21:03 -- nvmf/common.sh@124 -- # return 0 00:26:06.271 18:21:03 -- nvmf/common.sh@477 -- # '[' -n 90623 ']' 00:26:06.271 18:21:03 -- nvmf/common.sh@478 -- # killprocess 90623 00:26:06.271 18:21:03 -- common/autotest_common.sh@926 -- # '[' -z 90623 ']' 00:26:06.271 18:21:03 -- common/autotest_common.sh@930 -- # kill -0 90623 00:26:06.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (90623) - No such process 00:26:06.271 Process with pid 90623 is not found 00:26:06.271 18:21:03 -- common/autotest_common.sh@953 -- # echo 'Process with pid 90623 is not found' 00:26:06.271 18:21:03 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:06.271 18:21:03 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:06.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:06.787 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:06.787 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:06.787 18:21:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:06.787 18:21:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:06.787 18:21:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.787 18:21:04 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:06.787 18:21:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.788 18:21:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:06.788 18:21:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.788 18:21:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:06.788 00:26:06.788 real 0m24.372s 00:26:06.788 user 0m49.816s 00:26:06.788 sys 0m5.588s 00:26:06.788 18:21:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.788 18:21:04 -- common/autotest_common.sh@10 -- # set +x 00:26:06.788 ************************************ 00:26:06.788 END TEST nvmf_abort_qd_sizes 00:26:06.788 ************************************ 00:26:06.788 18:21:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:06.788 18:21:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:06.788 18:21:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:06.788 18:21:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:06.788 18:21:04 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:26:06.788 18:21:04 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:26:06.788 18:21:04 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:26:06.788 18:21:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:06.788 18:21:04 -- common/autotest_common.sh@10 -- # set +x 00:26:06.788 18:21:04 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:26:06.788 18:21:04 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:26:06.788 18:21:04 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:26:06.788 18:21:04 -- common/autotest_common.sh@10 -- # set +x 00:26:08.687 INFO: APP EXITING 00:26:08.687 INFO: killing all VMs 00:26:08.687 INFO: killing vhost app 00:26:08.687 INFO: EXIT DONE 00:26:08.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:09.203 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:09.203 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:09.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:09.770 Cleaning 00:26:09.770 Removing: /var/run/dpdk/spdk0/config 00:26:09.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:09.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:09.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:09.770 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:09.770 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:10.028 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:10.028 Removing: /var/run/dpdk/spdk1/config 00:26:10.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:10.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:10.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:10.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:10.028 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:10.028 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:10.028 Removing: /var/run/dpdk/spdk2/config 00:26:10.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:10.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:10.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:10.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:10.028 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:10.028 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:10.028 Removing: /var/run/dpdk/spdk3/config 00:26:10.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:10.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:10.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:10.028 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:10.028 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:10.028 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:10.028 Removing: /var/run/dpdk/spdk4/config 00:26:10.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:10.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:10.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:10.028 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:10.028 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:10.028 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:10.028 Removing: /dev/shm/nvmf_trace.0 00:26:10.028 Removing: /dev/shm/spdk_tgt_trace.pid55596 00:26:10.028 Removing: /var/run/dpdk/spdk0 00:26:10.028 Removing: /var/run/dpdk/spdk1 00:26:10.028 Removing: /var/run/dpdk/spdk2 00:26:10.028 Removing: /var/run/dpdk/spdk3 00:26:10.028 Removing: /var/run/dpdk/spdk4 00:26:10.028 Removing: /var/run/dpdk/spdk_pid55452 00:26:10.028 Removing: /var/run/dpdk/spdk_pid55596 00:26:10.028 Removing: /var/run/dpdk/spdk_pid55907 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56176 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56351 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56421 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56512 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56606 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56645 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56680 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56735 00:26:10.028 Removing: /var/run/dpdk/spdk_pid56847 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57471 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57535 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57604 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57632 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57711 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57739 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57818 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57846 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57903 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57933 00:26:10.028 Removing: /var/run/dpdk/spdk_pid57979 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58009 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58160 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58190 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58269 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58339 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58363 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58422 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58441 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58476 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58495 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58530 
00:26:10.028 Removing: /var/run/dpdk/spdk_pid58549 00:26:10.028 Removing: /var/run/dpdk/spdk_pid58584 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58603 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58638 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58657 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58692 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58717 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58746 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58766 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58800 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58820 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58854 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58874 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58908 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58928 00:26:10.029 Removing: /var/run/dpdk/spdk_pid58962 00:26:10.293 Removing: /var/run/dpdk/spdk_pid58982 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59016 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59036 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59072 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59092 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59126 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59146 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59186 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59200 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59240 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59254 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59294 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59323 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59355 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59383 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59421 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59440 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59480 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59500 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59535 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59604 00:26:10.293 Removing: /var/run/dpdk/spdk_pid59721 00:26:10.293 Removing: /var/run/dpdk/spdk_pid60136 00:26:10.293 Removing: /var/run/dpdk/spdk_pid66871 00:26:10.293 Removing: /var/run/dpdk/spdk_pid67212 00:26:10.293 Removing: /var/run/dpdk/spdk_pid68422 00:26:10.293 Removing: /var/run/dpdk/spdk_pid68796 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69053 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69101 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69359 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69361 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69419 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69477 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69538 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69576 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69583 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69609 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69647 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69653 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69713 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69771 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69831 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69869 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69877 00:26:10.293 Removing: /var/run/dpdk/spdk_pid69901 00:26:10.293 Removing: /var/run/dpdk/spdk_pid70190 00:26:10.293 Removing: /var/run/dpdk/spdk_pid70341 00:26:10.293 Removing: /var/run/dpdk/spdk_pid70598 00:26:10.293 Removing: /var/run/dpdk/spdk_pid70648 00:26:10.293 Removing: /var/run/dpdk/spdk_pid71031 00:26:10.293 Removing: /var/run/dpdk/spdk_pid71548 00:26:10.293 Removing: /var/run/dpdk/spdk_pid71980 00:26:10.293 Removing: /var/run/dpdk/spdk_pid72936 00:26:10.293 Removing: 
/var/run/dpdk/spdk_pid73918 00:26:10.293 Removing: /var/run/dpdk/spdk_pid74029 00:26:10.293 Removing: /var/run/dpdk/spdk_pid74098 00:26:10.293 Removing: /var/run/dpdk/spdk_pid75546 00:26:10.293 Removing: /var/run/dpdk/spdk_pid75780 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76217 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76322 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76475 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76519 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76560 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76606 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76769 00:26:10.293 Removing: /var/run/dpdk/spdk_pid76918 00:26:10.293 Removing: /var/run/dpdk/spdk_pid77181 00:26:10.293 Removing: /var/run/dpdk/spdk_pid77294 00:26:10.293 Removing: /var/run/dpdk/spdk_pid77710 00:26:10.293 Removing: /var/run/dpdk/spdk_pid78080 00:26:10.293 Removing: /var/run/dpdk/spdk_pid78088 00:26:10.293 Removing: /var/run/dpdk/spdk_pid80307 00:26:10.293 Removing: /var/run/dpdk/spdk_pid80608 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81105 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81107 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81443 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81463 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81477 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81508 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81518 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81657 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81659 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81766 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81769 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81878 00:26:10.293 Removing: /var/run/dpdk/spdk_pid81885 00:26:10.293 Removing: /var/run/dpdk/spdk_pid82312 00:26:10.293 Removing: /var/run/dpdk/spdk_pid82356 00:26:10.554 Removing: /var/run/dpdk/spdk_pid82435 00:26:10.554 Removing: /var/run/dpdk/spdk_pid82494 00:26:10.554 Removing: /var/run/dpdk/spdk_pid82828 00:26:10.554 Removing: /var/run/dpdk/spdk_pid83080 00:26:10.554 Removing: /var/run/dpdk/spdk_pid83566 00:26:10.554 Removing: /var/run/dpdk/spdk_pid84118 00:26:10.554 Removing: /var/run/dpdk/spdk_pid84586 00:26:10.554 Removing: /var/run/dpdk/spdk_pid84677 00:26:10.554 Removing: /var/run/dpdk/spdk_pid84750 00:26:10.554 Removing: /var/run/dpdk/spdk_pid84840 00:26:10.554 Removing: /var/run/dpdk/spdk_pid84996 00:26:10.554 Removing: /var/run/dpdk/spdk_pid85083 00:26:10.554 Removing: /var/run/dpdk/spdk_pid85168 00:26:10.554 Removing: /var/run/dpdk/spdk_pid85267 00:26:10.554 Removing: /var/run/dpdk/spdk_pid85606 00:26:10.554 Removing: /var/run/dpdk/spdk_pid86299 00:26:10.554 Removing: /var/run/dpdk/spdk_pid87641 00:26:10.554 Removing: /var/run/dpdk/spdk_pid87841 00:26:10.554 Removing: /var/run/dpdk/spdk_pid88132 00:26:10.554 Removing: /var/run/dpdk/spdk_pid88429 00:26:10.554 Removing: /var/run/dpdk/spdk_pid88977 00:26:10.554 Removing: /var/run/dpdk/spdk_pid88982 00:26:10.554 Removing: /var/run/dpdk/spdk_pid89338 00:26:10.554 Removing: /var/run/dpdk/spdk_pid89491 00:26:10.554 Removing: /var/run/dpdk/spdk_pid89655 00:26:10.554 Removing: /var/run/dpdk/spdk_pid89751 00:26:10.554 Removing: /var/run/dpdk/spdk_pid89910 00:26:10.554 Removing: /var/run/dpdk/spdk_pid90019 00:26:10.554 Removing: /var/run/dpdk/spdk_pid90692 00:26:10.554 Removing: /var/run/dpdk/spdk_pid90727 00:26:10.554 Removing: /var/run/dpdk/spdk_pid90758 00:26:10.554 Removing: /var/run/dpdk/spdk_pid91004 00:26:10.554 Removing: /var/run/dpdk/spdk_pid91039 00:26:10.554 Removing: /var/run/dpdk/spdk_pid91069 00:26:10.554 Clean 00:26:10.554 killing process with pid 
49648 00:26:10.554 killing process with pid 49649 00:26:10.554 18:21:08 -- common/autotest_common.sh@1436 -- # return 0 00:26:10.554 18:21:08 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:26:10.554 18:21:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:10.554 18:21:08 -- common/autotest_common.sh@10 -- # set +x 00:26:10.554 18:21:08 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:26:10.554 18:21:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:10.554 18:21:08 -- common/autotest_common.sh@10 -- # set +x 00:26:10.813 18:21:08 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:10.813 18:21:08 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:10.813 18:21:08 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:10.813 18:21:08 -- spdk/autotest.sh@394 -- # hash lcov 00:26:10.813 18:21:08 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:10.813 18:21:08 -- spdk/autotest.sh@396 -- # hostname 00:26:10.813 18:21:08 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:10.813 geninfo: WARNING: invalid characters removed from testname! 00:26:32.877 18:21:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:35.407 18:21:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:37.932 18:21:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:40.464 18:21:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:42.995 18:21:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:45.527 18:21:43 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:47.427 18:21:45 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:47.685 18:21:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:47.685 18:21:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:47.685 18:21:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.685 18:21:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.685 18:21:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.685 18:21:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.685 18:21:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.685 18:21:45 -- paths/export.sh@5 -- $ export PATH 00:26:47.685 18:21:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.685 18:21:45 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:47.685 18:21:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:26:47.685 18:21:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714069305.XXXXXX 00:26:47.685 18:21:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714069305.QQ8uNQ 00:26:47.685 18:21:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:26:47.685 18:21:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:26:47.685 18:21:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:47.685 18:21:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:47.685 18:21:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:47.685 
18:21:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:26:47.685 18:21:45 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:26:47.685 18:21:45 -- common/autotest_common.sh@10 -- $ set +x 00:26:47.685 18:21:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:26:47.685 18:21:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:26:47.685 18:21:45 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:26:47.685 18:21:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:47.685 18:21:45 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:26:47.685 18:21:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:47.685 18:21:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:47.685 18:21:45 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:47.685 18:21:45 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:47.685 18:21:45 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:47.685 18:21:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:47.685 + [[ -n 5135 ]] 00:26:47.685 + sudo kill 5135 00:26:47.695 [Pipeline] } 00:26:47.714 [Pipeline] // timeout 00:26:47.721 [Pipeline] } 00:26:47.739 [Pipeline] // stage 00:26:47.744 [Pipeline] } 00:26:47.762 [Pipeline] // catchError 00:26:47.771 [Pipeline] stage 00:26:47.773 [Pipeline] { (Stop VM) 00:26:47.788 [Pipeline] sh 00:26:48.067 + vagrant halt 00:26:51.364 ==> default: Halting domain... 00:26:57.940 [Pipeline] sh 00:26:58.219 + vagrant destroy -f 00:27:01.504 ==> default: Removing domain... 00:27:01.516 [Pipeline] sh 00:27:01.796 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:01.805 [Pipeline] } 00:27:01.822 [Pipeline] // stage 00:27:01.827 [Pipeline] } 00:27:01.843 [Pipeline] // dir 00:27:01.848 [Pipeline] } 00:27:01.864 [Pipeline] // wrap 00:27:01.870 [Pipeline] } 00:27:01.885 [Pipeline] // catchError 00:27:01.893 [Pipeline] stage 00:27:01.895 [Pipeline] { (Epilogue) 00:27:01.908 [Pipeline] sh 00:27:02.188 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:07.470 [Pipeline] catchError 00:27:07.471 [Pipeline] { 00:27:07.483 [Pipeline] sh 00:27:07.762 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:08.021 Artifacts sizes are good 00:27:08.031 [Pipeline] } 00:27:08.049 [Pipeline] // catchError 00:27:08.059 [Pipeline] archiveArtifacts 00:27:08.066 Archiving artifacts 00:27:08.236 [Pipeline] cleanWs 00:27:08.246 [WS-CLEANUP] Deleting project workspace... 00:27:08.246 [WS-CLEANUP] Deferred wipeout is used... 00:27:08.252 [WS-CLEANUP] done 00:27:08.254 [Pipeline] } 00:27:08.269 [Pipeline] // stage 00:27:08.274 [Pipeline] } 00:27:08.289 [Pipeline] // node 00:27:08.294 [Pipeline] End of Pipeline 00:27:08.329 Finished: SUCCESS
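[Annotation] After the functional suites, the job post-processes the gcov counters with lcov before packaging: one capture for the test run, a merge with the pre-test baseline, then a series of -r filters that strip vendored and generated sources from the combined report. Condensed from the commands traced above (the --rc coverage-rate switches are omitted here, and the output directory is written as its resolved path):

    out=/home/vagrant/spdk_repo/output
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o "$out/cov_test.info"
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # the same -r filter is repeated for '/usr/*', '*/examples/vmd/*', '*/app/spdk_lspci/*', '*/app/spdk_top/*'
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"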